Test Report: Docker_Linux_crio_arm64 21664

fca5789b7681da792c5737c174f2f0168409bc21:2025-10-17:41948

Failed tests (39/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.66
35 TestAddons/parallel/Registry 15.6
36 TestAddons/parallel/RegistryCreds 0.6
37 TestAddons/parallel/Ingress 145.5
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 6.49
41 TestAddons/parallel/CSI 41.3
42 TestAddons/parallel/Headlamp 3.12
43 TestAddons/parallel/CloudSpanner 5.38
44 TestAddons/parallel/LocalPath 11.4
45 TestAddons/parallel/NvidiaDevicePlugin 5.3
46 TestAddons/parallel/Yakd 6.27
98 TestFunctional/parallel/ServiceCmdConnect 603.55
126 TestFunctional/parallel/ServiceCmd/DeployApp 600.87
135 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
136 TestFunctional/parallel/ServiceCmd/Format 0.5
137 TestFunctional/parallel/ServiceCmd/URL 0.46
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.08
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
152 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.43
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
174 TestMultiControlPlane/serial/RestartClusterKeepsNodes 543.92
175 TestMultiControlPlane/serial/DeleteSecondaryNode 8.84
176 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 3.33
191 TestJSONOutput/pause/Command 1.91
197 TestJSONOutput/unpause/Command 2
282 TestPause/serial/Pause 9.1
343 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.99
348 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.5
355 TestStartStop/group/old-k8s-version/serial/Pause 9.38
361 TestStartStop/group/no-preload/serial/Pause 7.18
365 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.57
370 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.15
377 TestStartStop/group/embed-certs/serial/Pause 8.42
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 8.03
385 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 3.05
392 TestStartStop/group/newest-cni/serial/Pause 6.08
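Any individual failure can be re-run locally with the standard Go test runner. This is a sketch under assumptions: the integration suite is assumed to live under test/integration, and the job's docker/crio start arguments must be supplied the same way the CI invocation supplies them.

	go test ./test/integration -run "TestAddons/serial/Volcano" -timeout 60m
	# assumed path and timeout; also pass this job's --driver=docker --container-runtime=crio start arguments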
TestAddons/serial/Volcano (0.66s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable volcano --alsologtostderr -v=1: exit status 11 (663.759891ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 20:00:21.231987  592874 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:21.232829  592874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:21.232849  592874 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:21.232855  592874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:21.233159  592874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:21.233495  592874 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:21.233886  592874 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:21.233908  592874 addons.go:606] checking whether the cluster is paused
	I1017 20:00:21.234012  592874 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:21.234033  592874 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:21.234490  592874 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:21.255436  592874 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:21.255510  592874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:21.272314  592874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:21.377696  592874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:21.377811  592874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:21.421768  592874 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:21.421838  592874 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:21.421857  592874 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:21.421862  592874 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:21.421866  592874 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:21.421870  592874 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:21.421874  592874 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:21.421877  592874 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:21.421880  592874 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:21.421895  592874 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:21.421916  592874 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:21.421921  592874 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:21.421936  592874 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:21.421947  592874 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:21.421951  592874 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:21.421955  592874 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:21.421959  592874 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:21.421963  592874 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:21.421966  592874 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:21.421969  592874 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:21.421974  592874 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:21.421977  592874 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:21.421980  592874 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:21.421983  592874 cri.go:89] found id: ""
	I1017 20:00:21.422053  592874 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:21.437676  592874 out.go:203] 
	W1017 20:00:21.440605  592874 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:21.440635  592874 out.go:285] * 
	* 
	W1017 20:00:21.789298  592874 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:21.792293  592874 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.66s)
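Every exit-status-11 addon-disable failure in this run shares the root cause shown above: the paused-state check shells out to `sudo runc list -f json`, and on this CRI-O node /run/runc does not exist, so runc exits 1 even though crictl lists the kube-system containers fine. The two on-node commands from the log, run by hand through the same ssh wrapper, confirm the mismatch (a diagnostic sketch, not part of the test itself):

	out/minikube-linux-arm64 -p addons-948763 ssh "sudo runc list -f json"
	# exits 1: level=error msg="open /run/runc: no such file or directory"
	out/minikube-linux-arm64 -p addons-948763 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# succeeds and prints the same container IDs the check already found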

TestAddons/parallel/Registry (15.6s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 10.906067ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.010432199s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.004268432s
addons_test.go:392: (dbg) Run:  kubectl --context addons-948763 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-948763 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-948763 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.96132022s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 ip
2025/10/17 20:00:47 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable registry --alsologtostderr -v=1: exit status 11 (328.027332ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 20:00:47.449398  593813 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:47.450273  593813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:47.450306  593813 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:47.450326  593813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:47.450608  593813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:47.450990  593813 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:47.451843  593813 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:47.451898  593813 addons.go:606] checking whether the cluster is paused
	I1017 20:00:47.452077  593813 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:47.452126  593813 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:47.452728  593813 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:47.472037  593813 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:47.472095  593813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:47.502825  593813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:47.618053  593813 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:47.618165  593813 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:47.652262  593813 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:47.652299  593813 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:47.652306  593813 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:47.652310  593813 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:47.652314  593813 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:47.652323  593813 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:47.652327  593813 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:47.652330  593813 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:47.652340  593813 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:47.652351  593813 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:47.652355  593813 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:47.652358  593813 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:47.652361  593813 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:47.652376  593813 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:47.652380  593813 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:47.652386  593813 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:47.652392  593813 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:47.652397  593813 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:47.652400  593813 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:47.652403  593813 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:47.652413  593813 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:47.652420  593813 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:47.652423  593813 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:47.652426  593813 cri.go:89] found id: ""
	I1017 20:00:47.652482  593813 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:47.673661  593813 out.go:203] 
	W1017 20:00:47.676632  593813 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:47.676666  593813 out.go:285] * 
	* 
	W1017 20:00:47.691172  593813 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:47.693274  593813 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.60s)

TestAddons/parallel/RegistryCreds (0.6s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.017589ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-948763
addons_test.go:332: (dbg) Run:  kubectl --context addons-948763 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (311.358837ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1017 20:01:16.877474  594880 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:16.878875  594880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.878933  594880 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:16.878955  594880 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.879397  594880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:16.879756  594880 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:16.880200  594880 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.880234  594880 addons.go:606] checking whether the cluster is paused
	I1017 20:01:16.880363  594880 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.880403  594880 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:16.880891  594880 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:16.914524  594880 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:16.914587  594880 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:16.938727  594880 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:17.045701  594880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:17.045801  594880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:17.080270  594880 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:17.080291  594880 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:17.080296  594880 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:17.080311  594880 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:17.080315  594880 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:17.080319  594880 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:17.080340  594880 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:17.080348  594880 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:17.080351  594880 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:17.080358  594880 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:17.080364  594880 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:17.080367  594880 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:17.080376  594880 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:17.080400  594880 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:17.080403  594880 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:17.080408  594880 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:17.080426  594880 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:17.080432  594880 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:17.080436  594880 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:17.080439  594880 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:17.080444  594880 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:17.080447  594880 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:17.080451  594880 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:17.080467  594880 cri.go:89] found id: ""
	I1017 20:01:17.080552  594880 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:17.097833  594880 out.go:203] 
	W1017 20:01:17.101032  594880 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:17.101060  594880 out.go:285] * 
	* 
	W1017 20:01:17.108337  594880 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:17.111675  594880 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.60s)

TestAddons/parallel/Ingress (145.5s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-948763 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-948763 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-948763 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3e166e20-d4cf-4e36-ba7e-f133dc83318e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3e166e20-d4cf-4e36-ba7e-f133dc83318e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003190328s
I1017 20:01:10.380626  586172 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.32431749s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-948763 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
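The failing step above is a curl from inside the node with a Host header; the remote command's exit status 28 matches curl's operation-timed-out code, suggesting the ingress controller never answered. A hedged follow-up (standard curl and kubectl flags; no resource names assumed beyond the namespaces already in the log) bounds the wait and inspects the controller directly:

	out/minikube-linux-arm64 -p addons-948763 ssh "curl -sv --max-time 30 -H 'Host: nginx.example.com' http://127.0.0.1/"
	kubectl --context addons-948763 -n ingress-nginx get pods -o wide
	kubectl --context addons-948763 get ingress -A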
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-948763
helpers_test.go:243: (dbg) docker inspect addons-948763:

-- stdout --
	[
	    {
	        "Id": "5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440",
	        "Created": "2025-10-17T19:57:52.38390509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:57:52.446314301Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/hosts",
	        "LogPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440-json.log",
	        "Name": "/addons-948763",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-948763:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-948763",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440",
	                "LowerDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-948763",
	                "Source": "/var/lib/docker/volumes/addons-948763/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-948763",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-948763",
	                "name.minikube.sigs.k8s.io": "addons-948763",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8edc7c9f4b4d958807db6a9119427afa05a32700103e91267047f8f774543c65",
	            "SandboxKey": "/var/run/docker/netns/8edc7c9f4b4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-948763": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e3:48:f3:fb:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6c4d40919db09851872993f602342c89bd57e0bb2321056f6e797ba7ad60426",
	                    "EndpointID": "042e0d6b37d851c8e012c5c4fe0ed0edb994b9afb914ad39c3400941c2e92be0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-948763",
	                        "5d47ee6e89dc"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-948763 -n addons-948763
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-948763 logs -n 25: (1.524583421s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-785685                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-785685 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ --download-only -p binary-mirror-465085 --alsologtostderr --binary-mirror http://127.0.0.1:39883 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-465085   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ -p binary-mirror-465085                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-465085   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p addons-948763                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-948763                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ start   │ -p addons-948763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 20:00 UTC │
	│ addons  │ addons-948763 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ addons-948763 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-948763 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ addons-948763 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ ip      │ addons-948763 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │ 17 Oct 25 20:00 UTC │
	│ addons  │ addons-948763 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ addons-948763 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ addons-948763 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ ssh     │ addons-948763 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-948763 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-948763 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-948763                                                                                                                                                                                                                                                                                                                                                                                           │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ addons  │ addons-948763 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ssh     │ addons-948763 ssh cat /opt/local-path-provisioner/pvc-75e5d985-dd82-4e2e-bc28-1cc76f6e0618_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │ 17 Oct 25 20:01 UTC │
	│ addons  │ addons-948763 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-948763 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-948763 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ addons  │ addons-948763 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:01 UTC │                     │
	│ ip      │ addons-948763 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:03 UTC │ 17 Oct 25 20:03 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:57:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:57:25.708417  586929 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:57:25.708534  586929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:25.708543  586929 out.go:374] Setting ErrFile to fd 2...
	I1017 19:57:25.708549  586929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:25.708805  586929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 19:57:25.709290  586929 out.go:368] Setting JSON to false
	I1017 19:57:25.710149  586929 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9592,"bootTime":1760721454,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 19:57:25.710217  586929 start.go:141] virtualization:  
	I1017 19:57:25.715299  586929 out.go:179] * [addons-948763] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:57:25.718340  586929 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:57:25.718431  586929 notify.go:220] Checking for updates...
	I1017 19:57:25.724023  586929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:57:25.726932  586929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:57:25.729762  586929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 19:57:25.732487  586929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:57:25.735366  586929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:57:25.738524  586929 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:57:25.762378  586929 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:57:25.762519  586929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:25.836686  586929 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 19:57:25.827835294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:25.836793  586929 docker.go:318] overlay module found
	I1017 19:57:25.839890  586929 out.go:179] * Using the docker driver based on user configuration
	I1017 19:57:25.842642  586929 start.go:305] selected driver: docker
	I1017 19:57:25.842658  586929 start.go:925] validating driver "docker" against <nil>
	I1017 19:57:25.842673  586929 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:57:25.843449  586929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:25.896195  586929 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 19:57:25.886727968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:25.896347  586929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:57:25.896582  586929 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:57:25.899495  586929 out.go:179] * Using Docker driver with root privileges
	I1017 19:57:25.902349  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:57:25.902415  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:57:25.902427  586929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:57:25.902506  586929 start.go:349] cluster config:
	{Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1017 19:57:25.905521  586929 out.go:179] * Starting "addons-948763" primary control-plane node in "addons-948763" cluster
	I1017 19:57:25.908290  586929 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:57:25.911243  586929 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:57:25.914053  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:25.914105  586929 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:57:25.914119  586929 cache.go:58] Caching tarball of preloaded images
	I1017 19:57:25.914148  586929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:57:25.914202  586929 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:57:25.914212  586929 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:57:25.914546  586929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json ...
	I1017 19:57:25.914577  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json: {Name:mk6fbbe992c885173d02c12fb732ce7886450d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:25.930173  586929 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:57:25.930322  586929 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:57:25.930342  586929 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 19:57:25.930347  586929 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 19:57:25.930361  586929 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 19:57:25.930366  586929 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 19:57:43.981774  586929 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 19:57:43.981817  586929 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:57:43.981848  586929 start.go:360] acquireMachinesLock for addons-948763: {Name:mk68e71e96d7a5ca2beb265f792c62d71f65313a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:57:43.982507  586929 start.go:364] duration metric: took 633.174µs to acquireMachinesLock for "addons-948763"
	I1017 19:57:43.982550  586929 start.go:93] Provisioning new machine with config: &{Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:57:43.982652  586929 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:57:43.986131  586929 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 19:57:43.986379  586929 start.go:159] libmachine.API.Create for "addons-948763" (driver="docker")
	I1017 19:57:43.986438  586929 client.go:168] LocalClient.Create starting
	I1017 19:57:43.986563  586929 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 19:57:45.470574  586929 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 19:57:45.610855  586929 cli_runner.go:164] Run: docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:57:45.628013  586929 cli_runner.go:211] docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:57:45.628104  586929 network_create.go:284] running [docker network inspect addons-948763] to gather additional debugging logs...
	I1017 19:57:45.628136  586929 cli_runner.go:164] Run: docker network inspect addons-948763
	W1017 19:57:45.643559  586929 cli_runner.go:211] docker network inspect addons-948763 returned with exit code 1
	I1017 19:57:45.643589  586929 network_create.go:287] error running [docker network inspect addons-948763]: docker network inspect addons-948763: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-948763 not found
	I1017 19:57:45.643602  586929 network_create.go:289] output of [docker network inspect addons-948763]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-948763 not found
	
	** /stderr **
	I1017 19:57:45.643703  586929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:57:45.659514  586929 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d41570}
	I1017 19:57:45.659556  586929 network_create.go:124] attempt to create docker network addons-948763 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 19:57:45.659620  586929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-948763 addons-948763
	I1017 19:57:45.718108  586929 network_create.go:108] docker network addons-948763 192.168.49.0/24 created
	I1017 19:57:45.718144  586929 kic.go:121] calculated static IP "192.168.49.2" for the "addons-948763" container
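	The two lines above show how the static container address falls out of the chosen subnet: the gateway takes the first host address of 192.168.49.0/24 and the container takes the next one. A minimal Go sketch of that derivation, assuming the same /24 layout reported here; firstClientIP is an illustrative helper name, not minikube's own code.

	package main

	import (
		"fmt"
		"net"
	)

	// firstClientIP is a hypothetical helper: for a /24 such as 192.168.49.0/24 it
	// returns the gateway (.1) and the first client address (.2), which is the
	// static IP the log above calculates for the container.
	func firstClientIP(cidr string) (gw, client net.IP, err error) {
		ip, ipnet, err := net.ParseCIDR(cidr)
		if err != nil {
			return nil, nil, err
		}
		base := ip.Mask(ipnet.Mask).To4()
		gw = net.IPv4(base[0], base[1], base[2], base[3]+1)
		client = net.IPv4(base[0], base[1], base[2], base[3]+2)
		return gw, client, nil
	}

	func main() {
		gw, client, err := firstClientIP("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		fmt.Println(gw, client) // 192.168.49.1 192.168.49.2
	}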
	I1017 19:57:45.718231  586929 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:57:45.737126  586929 cli_runner.go:164] Run: docker volume create addons-948763 --label name.minikube.sigs.k8s.io=addons-948763 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:57:45.755786  586929 oci.go:103] Successfully created a docker volume addons-948763
	I1017 19:57:45.755877  586929 cli_runner.go:164] Run: docker run --rm --name addons-948763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --entrypoint /usr/bin/test -v addons-948763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:57:47.900151  586929 cli_runner.go:217] Completed: docker run --rm --name addons-948763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --entrypoint /usr/bin/test -v addons-948763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.14421258s)
	I1017 19:57:47.900186  586929 oci.go:107] Successfully prepared a docker volume addons-948763
	I1017 19:57:47.900207  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:47.900226  586929 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:57:47.900296  586929 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-948763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 19:57:52.313822  586929 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-948763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413484379s)
	I1017 19:57:52.313853  586929 kic.go:203] duration metric: took 4.413624583s to extract preloaded images to volume ...
	W1017 19:57:52.313986  586929 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 19:57:52.314094  586929 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:57:52.368818  586929 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-948763 --name addons-948763 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-948763 --network addons-948763 --ip 192.168.49.2 --volume addons-948763:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 19:57:52.641407  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Running}}
	I1017 19:57:52.661063  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:52.687149  586929 cli_runner.go:164] Run: docker exec addons-948763 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:57:52.739807  586929 oci.go:144] the created container "addons-948763" has a running status.
	I1017 19:57:52.739836  586929 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa...
	I1017 19:57:53.302878  586929 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:57:53.325169  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:53.347185  586929 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:57:53.347206  586929 kic_runner.go:114] Args: [docker exec --privileged addons-948763 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:57:53.399407  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:53.435368  586929 machine.go:93] provisionDockerMachine start ...
	I1017 19:57:53.435472  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.464716  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.465037  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.465051  586929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:57:53.646964  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-948763
	
	I1017 19:57:53.646987  586929 ubuntu.go:182] provisioning hostname "addons-948763"
	I1017 19:57:53.647052  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.670147  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.670453  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.670467  586929 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-948763 && echo "addons-948763" | sudo tee /etc/hostname
	I1017 19:57:53.838317  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-948763
	
	I1017 19:57:53.838404  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.856599  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.856911  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.856932  586929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-948763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-948763/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-948763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:57:54.012471  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
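	Machine provisioning above runs shell commands over SSH to the container's forwarded port (127.0.0.1:33512 in this run) using the generated id_rsa machine key. A minimal sketch of such a connection with golang.org/x/crypto/ssh, assuming the key path and port from this log; it only runs hostname, the first command shown above, and is not minikube's own provisioner.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and forwarded port taken from the log above; adjust for your run.
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway local test node
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33512", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out)) // expected: addons-948763
	}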
	I1017 19:57:54.012500  586929 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 19:57:54.012547  586929 ubuntu.go:190] setting up certificates
	I1017 19:57:54.012558  586929 provision.go:84] configureAuth start
	I1017 19:57:54.012628  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.030502  586929 provision.go:143] copyHostCerts
	I1017 19:57:54.030606  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 19:57:54.030737  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 19:57:54.030797  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 19:57:54.030849  586929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.addons-948763 san=[127.0.0.1 192.168.49.2 addons-948763 localhost minikube]
	I1017 19:57:54.229175  586929 provision.go:177] copyRemoteCerts
	I1017 19:57:54.229244  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:57:54.229284  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.246366  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.350918  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:57:54.368528  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:57:54.385836  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:57:54.404762  586929 provision.go:87] duration metric: took 392.177291ms to configureAuth
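	configureAuth above ends with a CA-signed server certificate whose SANs are the addresses and names listed for addons-948763 (127.0.0.1, 192.168.49.2, addons-948763, localhost, minikube). A short crypto/x509 sketch of a certificate with that shape, assuming freshly generated RSA keys; it is illustrative only, not the provisioning code used in this run.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must[T any](v T, err error) T {
		if err != nil {
			panic(err)
		}
		return v
	}

	func main() {
		// Self-signed CA, standing in for ca.pem / ca-key.pem from the log above.
		caKey := must(rsa.GenerateKey(rand.Reader, 2048))
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // mirrors CertExpiration in the cluster config
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
		caCert := must(x509.ParseCertificate(caDER))

		// Server certificate with the SANs reported for addons-948763.
		srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-948763"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"addons-948763", "localhost", "minikube"},
		}
		srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}); err != nil {
			panic(err)
		}
	}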
	I1017 19:57:54.404791  586929 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:57:54.405008  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:57:54.405120  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.422221  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:54.422534  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:54.422557  586929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:57:54.681007  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:57:54.681072  586929 machine.go:96] duration metric: took 1.245682786s to provisionDockerMachine
	I1017 19:57:54.681097  586929 client.go:171] duration metric: took 10.694646314s to LocalClient.Create
	I1017 19:57:54.681129  586929 start.go:167] duration metric: took 10.694750676s to libmachine.API.Create "addons-948763"
	I1017 19:57:54.681167  586929 start.go:293] postStartSetup for "addons-948763" (driver="docker")
	I1017 19:57:54.681192  586929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:57:54.681300  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:57:54.681414  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.697811  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.798991  586929 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:57:54.802126  586929 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:57:54.802153  586929 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:57:54.802164  586929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 19:57:54.802232  586929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 19:57:54.802261  586929 start.go:296] duration metric: took 121.073817ms for postStartSetup
	I1017 19:57:54.802574  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.819935  586929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json ...
	I1017 19:57:54.820203  586929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:57:54.820252  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.839513  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.940177  586929 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:57:54.945018  586929 start.go:128] duration metric: took 10.96235005s to createHost
	I1017 19:57:54.945040  586929 start.go:83] releasing machines lock for "addons-948763", held for 10.962512777s
	I1017 19:57:54.945110  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.965307  586929 ssh_runner.go:195] Run: cat /version.json
	I1017 19:57:54.965334  586929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:57:54.965361  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.965397  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.983334  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.992859  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:55.194067  586929 ssh_runner.go:195] Run: systemctl --version
	I1017 19:57:55.200181  586929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:57:55.234975  586929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:57:55.239147  586929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:57:55.239270  586929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:57:55.266110  586929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 19:57:55.266197  586929 start.go:495] detecting cgroup driver to use...
	I1017 19:57:55.266244  586929 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:57:55.266319  586929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:57:55.283076  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:57:55.295253  586929 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:57:55.295339  586929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:57:55.312297  586929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:57:55.331465  586929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:57:55.449400  586929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:57:55.579257  586929 docker.go:234] disabling docker service ...
	I1017 19:57:55.579353  586929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:57:55.599908  586929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:57:55.612975  586929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:57:55.723423  586929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:57:55.846699  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:57:55.859307  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:57:55.872860  586929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:57:55.872934  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.881564  586929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:57:55.881676  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.891070  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.900301  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.909690  586929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:57:55.918467  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.927594  586929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.941060  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.949741  586929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:57:55.957161  586929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:57:55.964345  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:57:56.076303  586929 ssh_runner.go:195] Run: sudo systemctl restart crio
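	The sed invocations above pin the pause image and force the cgroupfs manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A small Go sketch of the same two substitutions applied to an in-memory copy of such a drop-in; the sample file contents are assumed, not read from this run.

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := `[crio.runtime]
	cgroup_manager = "systemd"

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	`
		// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
		fmt.Print(conf)
	}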
	I1017 19:57:56.201147  586929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:57:56.201289  586929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:57:56.205067  586929 start.go:563] Will wait 60s for crictl version
	I1017 19:57:56.205176  586929 ssh_runner.go:195] Run: which crictl
	I1017 19:57:56.208507  586929 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:57:56.232904  586929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:57:56.233050  586929 ssh_runner.go:195] Run: crio --version
	I1017 19:57:56.262059  586929 ssh_runner.go:195] Run: crio --version
	I1017 19:57:56.293819  586929 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:57:56.296660  586929 cli_runner.go:164] Run: docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:57:56.311224  586929 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:57:56.314706  586929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
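	The one-liner above keeps the host.minikube.internal entry unique: it filters any existing line for that name out of /etc/hosts and appends the current mapping. A rough Go equivalent operating on an in-memory copy; the upsertHost helper and sample contents are hypothetical.

	package main

	import (
		"fmt"
		"strings"
	)

	// upsertHost mirrors the shell pipeline above: drop any existing line ending
	// in "<tab>name", then append "ip<tab>name", so repeated starts do not
	// accumulate duplicate entries.
	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n"
		fmt.Print(upsertHost(hosts, "192.168.49.1", "host.minikube.internal"))
	}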
	I1017 19:57:56.324027  586929 kubeadm.go:883] updating cluster {Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:57:56.324150  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:56.324213  586929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:57:56.357103  586929 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:57:56.357128  586929 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:57:56.357188  586929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:57:56.382126  586929 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:57:56.382148  586929 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:57:56.382156  586929 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:57:56.382286  586929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-948763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:57:56.382384  586929 ssh_runner.go:195] Run: crio config
	I1017 19:57:56.433873  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:57:56.433897  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:57:56.433916  586929 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:57:56.433967  586929 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-948763 NodeName:addons-948763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:57:56.434187  586929 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-948763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:57:56.434287  586929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:57:56.442274  586929 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:57:56.442386  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:57:56.450109  586929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:57:56.463517  586929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:57:56.475842  586929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 19:57:56.489036  586929 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:57:56.492701  586929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:57:56.502450  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:57:56.614617  586929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:57:56.636901  586929 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763 for IP: 192.168.49.2
	I1017 19:57:56.636970  586929 certs.go:195] generating shared ca certs ...
	I1017 19:57:56.637004  586929 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:56.637188  586929 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 19:57:57.401074  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt ...
	I1017 19:57:57.401108  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt: {Name:mk2284f82e0c9b99696c8a1614a44d6a7619b033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.401320  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key ...
	I1017 19:57:57.401334  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key: {Name:mkab7b9ac9299104fd96211f14f1d513b7f9d51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.402055  586929 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 19:57:57.830396  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt ...
	I1017 19:57:57.830432  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt: {Name:mk761e132cc40987111a33bc312c624c4a89dd04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.831274  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key ...
	I1017 19:57:57.831291  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key: {Name:mk1adffd7f636109c810327716c0450bc669be52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.831943  586929 certs.go:257] generating profile certs ...
	I1017 19:57:57.832008  586929 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key
	I1017 19:57:57.832025  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt with IP's: []
	I1017 19:57:58.495006  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt ...
	I1017 19:57:58.495038  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: {Name:mk96d9941e1d00385c50ab5d03c51c54ebdddb8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:58.495235  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key ...
	I1017 19:57:58.495247  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key: {Name:mk2cabde2d08f0807e2862c2336fc0029f2db9c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:58.495349  586929 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b
	I1017 19:57:58.495369  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 19:57:59.195244  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b ...
	I1017 19:57:59.195276  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b: {Name:mk01618b660bda5b796ad0f6a57510890dc176a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.195451  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b ...
	I1017 19:57:59.195465  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b: {Name:mk73a701fd2daa299da09e2f89352133a36098b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.195550  586929 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt
	I1017 19:57:59.195635  586929 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key
	I1017 19:57:59.195688  586929 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key
	I1017 19:57:59.195703  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt with IP's: []
	I1017 19:57:59.686172  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt ...
	I1017 19:57:59.686204  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt: {Name:mk28f3e286380b2d4ab16eb013ce090dbea224be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.686386  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key ...
	I1017 19:57:59.686404  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key: {Name:mkbe8ee3c63df4d589cace96d9bb321c55126e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
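
The certs.go/crypto.go steps above create a self-signed minikubeCA plus per-profile certificates, with the apiserver certificate carrying the IP SANs listed at 19:57:58 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2). The following is a minimal standard-library sketch of that flow, not minikube's actual implementation; key sizes, validity periods and output file names are assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Self-signed CA, roughly what "generating shared ca certs" does.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate for the apiserver, signed by the CA, with the IP SANs from the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)

        // Write the leaf cert as PEM, mirroring the "Writing cert to ..." steps.
        _ = os.WriteFile("apiserver.crt",
            pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER}), 0644)
    }
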
	I1017 19:57:59.686608  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:57:59.686656  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:57:59.686688  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:57:59.686725  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 19:57:59.687370  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:57:59.716100  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 19:57:59.740064  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:57:59.757780  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:57:59.775490  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:57:59.792592  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:57:59.809919  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:57:59.829095  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:57:59.846611  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:57:59.863825  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:57:59.876942  586929 ssh_runner.go:195] Run: openssl version
	I1017 19:57:59.883017  586929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:57:59.891500  586929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.895188  586929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.895260  586929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.940797  586929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:57:59.949177  586929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:57:59.952860  586929 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:57:59.952958  586929 kubeadm.go:400] StartCluster: {Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:57:59.953047  586929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:57:59.953104  586929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:57:59.982804  586929 cri.go:89] found id: ""
	I1017 19:57:59.982887  586929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:57:59.991051  586929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:57:59.998750  586929 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:57:59.998819  586929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:58:00.013977  586929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:58:00.013997  586929 kubeadm.go:157] found existing configuration files:
	
	I1017 19:58:00.014060  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 19:58:00.053893  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:58:00.053975  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:58:00.085185  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 19:58:00.121301  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:58:00.123280  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:58:00.152126  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 19:58:00.178258  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:58:00.178358  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:58:00.196846  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 19:58:00.208420  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:58:00.208531  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
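
Each of the four grep-then-rm pairs above implements the same stale-config check: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443 (or is missing), it is removed so kubeadm can regenerate it. A compact sketch of that loop, assuming direct file access rather than ssh_runner, might look like:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Remove kubeconfig files that do not reference the expected API endpoint,
    // mirroring the grep-then-rm loop in the log above.
    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Stale or missing: delete so kubeadm regenerates it.
                if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
                    fmt.Fprintln(os.Stderr, rmErr)
                }
            }
        }
    }
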
	I1017 19:58:00.243422  586929 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:58:00.333539  586929 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:58:00.334052  586929 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:58:00.389055  586929 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:58:00.389130  586929 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 19:58:00.389168  586929 kubeadm.go:318] OS: Linux
	I1017 19:58:00.389218  586929 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:58:00.389269  586929 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 19:58:00.389320  586929 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:58:00.389371  586929 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:58:00.389433  586929 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:58:00.389489  586929 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:58:00.389538  586929 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:58:00.389589  586929 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:58:00.389642  586929 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 19:58:00.483846  586929 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:58:00.483996  586929 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:58:00.484100  586929 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:58:00.494920  586929 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:58:00.501762  586929 out.go:252]   - Generating certificates and keys ...
	I1017 19:58:00.501882  586929 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:58:00.501965  586929 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:58:01.730273  586929 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:58:01.930112  586929 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:58:02.488393  586929 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:58:03.378158  586929 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:58:03.784393  586929 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:58:03.784571  586929 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-948763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:58:04.260196  586929 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:58:04.260493  586929 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-948763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:58:04.656507  586929 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:58:05.129321  586929 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:58:05.703681  586929 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:58:05.703968  586929 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:58:06.027830  586929 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:58:06.251642  586929 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:58:06.587003  586929 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:58:07.654619  586929 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:58:07.735603  586929 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:58:07.736202  586929 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:58:07.738758  586929 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:58:07.742127  586929 out.go:252]   - Booting up control plane ...
	I1017 19:58:07.742226  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:58:07.742307  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:58:07.742386  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:58:07.756649  586929 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:58:07.756955  586929 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:58:07.764413  586929 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:58:07.764683  586929 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:58:07.764871  586929 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:58:07.898927  586929 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:58:07.899058  586929 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:58:09.402917  586929 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.504039545s
	I1017 19:58:09.406409  586929 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:58:09.406514  586929 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 19:58:09.406770  586929 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:58:09.406866  586929 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:58:13.031065  586929 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.62421357s
	I1017 19:58:14.786817  586929 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.380332439s
	I1017 19:58:15.909100  586929 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502439573s
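
The control-plane-check phase polls the four health endpoints named above until each answers 200 OK: the kubelet healthz on 10248, the apiserver livez on 8443, the controller-manager healthz on 10257 and the scheduler livez on 10259. A rough sketch of such a poll loop (the timeouts and the skipped TLS verification are assumptions made to keep it short) is:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls url until it returns 200 OK or the timeout elapses.
    // TLS verification is skipped purely to keep the sketch self-contained.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        endpoints := []string{
            "http://127.0.0.1:10248/healthz",  // kubelet
            "https://192.168.49.2:8443/livez", // kube-apiserver
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        }
        for _, ep := range endpoints {
            if err := waitHealthy(ep, 4*time.Minute); err != nil {
                fmt.Println(err)
            }
        }
    }
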
	I1017 19:58:15.929968  586929 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:58:15.943398  586929 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:58:15.961417  586929 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:58:15.961652  586929 kubeadm.go:318] [mark-control-plane] Marking the node addons-948763 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:58:15.973971  586929 kubeadm.go:318] [bootstrap-token] Using token: lbpa4m.5ssgzkitrp191svg
	I1017 19:58:15.977116  586929 out.go:252]   - Configuring RBAC rules ...
	I1017 19:58:15.977274  586929 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:58:15.983811  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:58:15.994655  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:58:15.999937  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:58:16.007767  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:58:16.020786  586929 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:58:16.318378  586929 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:58:16.759558  586929 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:58:17.316428  586929 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:58:17.318999  586929 kubeadm.go:318] 
	I1017 19:58:17.319084  586929 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:58:17.319097  586929 kubeadm.go:318] 
	I1017 19:58:17.319212  586929 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:58:17.319224  586929 kubeadm.go:318] 
	I1017 19:58:17.319267  586929 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:58:17.319332  586929 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:58:17.319415  586929 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:58:17.319435  586929 kubeadm.go:318] 
	I1017 19:58:17.319502  586929 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:58:17.319511  586929 kubeadm.go:318] 
	I1017 19:58:17.319591  586929 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:58:17.319600  586929 kubeadm.go:318] 
	I1017 19:58:17.319655  586929 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:58:17.319748  586929 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:58:17.319831  586929 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:58:17.319840  586929 kubeadm.go:318] 
	I1017 19:58:17.319932  586929 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:58:17.320019  586929 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:58:17.320027  586929 kubeadm.go:318] 
	I1017 19:58:17.320135  586929 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lbpa4m.5ssgzkitrp191svg \
	I1017 19:58:17.320250  586929 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 19:58:17.320272  586929 kubeadm.go:318] 	--control-plane 
	I1017 19:58:17.320277  586929 kubeadm.go:318] 
	I1017 19:58:17.320366  586929 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:58:17.320371  586929 kubeadm.go:318] 
	I1017 19:58:17.320466  586929 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lbpa4m.5ssgzkitrp191svg \
	I1017 19:58:17.320574  586929 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 19:58:17.323632  586929 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 19:58:17.323900  586929 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 19:58:17.324033  586929 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:58:17.324047  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:58:17.324055  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:58:17.327224  586929 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:58:17.330207  586929 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:58:17.334792  586929 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:58:17.334813  586929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:58:17.349608  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:58:17.675142  586929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:58:17.675277  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:17.675371  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-948763 minikube.k8s.io/updated_at=2025_10_17T19_58_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=addons-948763 minikube.k8s.io/primary=true
	I1017 19:58:17.836779  586929 ops.go:34] apiserver oom_adj: -16
	I1017 19:58:17.836971  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:18.337548  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:18.837069  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:19.337155  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:19.837672  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:20.337067  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:20.837653  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.337114  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.837131  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.941228  586929 kubeadm.go:1113] duration metric: took 4.26599672s to wait for elevateKubeSystemPrivileges
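
The burst of identical "kubectl get sa default" invocations above is a poll loop: minikube retries roughly every 500ms until the default service account exists, and the duration metric marks how long that took (here about 4.3s). A simple sketch of that wait, reusing the binary and kubeconfig paths from the log, could be:

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // Wait until the "default" service account exists, mirroring the repeated
    // "kubectl get sa default" calls above (run every ~500ms until one succeeds).
    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default",
                "--kubeconfig=/var/lib/minikube/kubeconfig")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }
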
	I1017 19:58:21.941257  586929 kubeadm.go:402] duration metric: took 21.988302535s to StartCluster
	I1017 19:58:21.941274  586929 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:58:21.941398  586929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:58:21.941777  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:58:21.941990  586929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:58:21.942177  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:58:21.942446  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:58:21.942554  586929 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 19:58:21.942633  586929 addons.go:69] Setting yakd=true in profile "addons-948763"
	I1017 19:58:21.942646  586929 addons.go:238] Setting addon yakd=true in "addons-948763"
	I1017 19:58:21.942669  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.943214  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.943764  586929 addons.go:69] Setting metrics-server=true in profile "addons-948763"
	I1017 19:58:21.943798  586929 addons.go:238] Setting addon metrics-server=true in "addons-948763"
	I1017 19:58:21.943831  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.943859  586929 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-948763"
	I1017 19:58:21.943877  586929 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-948763"
	I1017 19:58:21.943908  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.944257  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.944294  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948612  586929 addons.go:69] Setting registry=true in profile "addons-948763"
	I1017 19:58:21.949125  586929 addons.go:238] Setting addon registry=true in "addons-948763"
	I1017 19:58:21.949231  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.948784  586929 addons.go:69] Setting registry-creds=true in profile "addons-948763"
	I1017 19:58:21.949990  586929 addons.go:238] Setting addon registry-creds=true in "addons-948763"
	I1017 19:58:21.950061  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.948798  586929 addons.go:69] Setting storage-provisioner=true in profile "addons-948763"
	I1017 19:58:21.951082  586929 addons.go:238] Setting addon storage-provisioner=true in "addons-948763"
	I1017 19:58:21.951120  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.951512  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.956052  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948808  586929 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-948763"
	I1017 19:58:21.959543  586929 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-948763"
	I1017 19:58:21.959961  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948815  586929 addons.go:69] Setting volcano=true in profile "addons-948763"
	I1017 19:58:21.960121  586929 addons.go:238] Setting addon volcano=true in "addons-948763"
	I1017 19:58:21.948822  586929 addons.go:69] Setting volumesnapshots=true in profile "addons-948763"
	I1017 19:58:21.949026  586929 out.go:179] * Verifying Kubernetes components...
	I1017 19:58:21.949041  586929 addons.go:69] Setting default-storageclass=true in profile "addons-948763"
	I1017 19:58:21.949049  586929 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-948763"
	I1017 19:58:21.949056  586929 addons.go:69] Setting cloud-spanner=true in profile "addons-948763"
	I1017 19:58:21.949062  586929 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-948763"
	I1017 19:58:21.949077  586929 addons.go:69] Setting ingress=true in profile "addons-948763"
	I1017 19:58:21.949083  586929 addons.go:69] Setting gcp-auth=true in profile "addons-948763"
	I1017 19:58:21.949089  586929 addons.go:69] Setting ingress-dns=true in profile "addons-948763"
	I1017 19:58:21.949101  586929 addons.go:69] Setting inspektor-gadget=true in profile "addons-948763"
	I1017 19:58:21.960188  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.987647  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.988242  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.988543  586929 addons.go:238] Setting addon ingress=true in "addons-948763"
	I1017 19:58:21.988604  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.989032  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.011320  586929 mustload.go:65] Loading cluster: addons-948763
	I1017 19:58:22.011666  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:58:22.012066  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.013452  586929 addons.go:238] Setting addon volumesnapshots=true in "addons-948763"
	I1017 19:58:22.013569  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.014067  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.029966  586929 addons.go:238] Setting addon ingress-dns=true in "addons-948763"
	I1017 19:58:22.030036  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.030493  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.041784  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:58:22.041919  586929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-948763"
	I1017 19:58:22.042249  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.047974  586929 addons.go:238] Setting addon inspektor-gadget=true in "addons-948763"
	I1017 19:58:22.048041  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.048638  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.051070  586929 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-948763"
	I1017 19:58:22.051139  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.051615  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.060927  586929 addons.go:238] Setting addon cloud-spanner=true in "addons-948763"
	I1017 19:58:22.061038  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.065714  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.074655  586929 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-948763"
	I1017 19:58:22.074745  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.075364  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.083015  586929 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 19:58:22.087573  586929 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:58:22.087639  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 19:58:22.087733  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.097397  586929 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 19:58:22.100737  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 19:58:22.100812  586929 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 19:58:22.100935  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.140813  586929 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 19:58:22.172753  586929 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 19:58:22.218313  586929 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 19:58:22.221819  586929 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 19:58:22.222037  586929 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 19:58:22.222084  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 19:58:22.222188  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.228223  586929 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:58:22.228304  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 19:58:22.228399  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.253920  586929 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:58:22.257159  586929 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:58:22.257228  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:58:22.257330  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.257523  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 19:58:22.260426  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 19:58:22.260490  586929 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 19:58:22.260600  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.275279  586929 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 19:58:22.275717  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:58:22.275944  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 19:58:22.275957  586929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 19:58:22.276025  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.282732  586929 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-948763"
	I1017 19:58:22.282847  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.283446  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.300036  586929 addons.go:238] Setting addon default-storageclass=true in "addons-948763"
	I1017 19:58:22.300124  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.300573  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.350184  586929 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:58:22.350240  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 19:58:22.350314  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.357385  586929 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 19:58:22.360308  586929 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 19:58:22.360337  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 19:58:22.360403  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.350130  586929 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 19:58:22.369499  586929 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:58:22.369522  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 19:58:22.369612  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.391000  586929 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 19:58:22.391757  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.394556  586929 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 19:58:22.394577  586929 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 19:58:22.394647  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.411953  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 19:58:22.414873  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	W1017 19:58:22.419421  586929 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 19:58:22.420337  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.424931  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:58:22.425055  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 19:58:22.428077  586929 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:58:22.428095  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 19:58:22.428162  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.441156  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 19:58:22.447630  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 19:58:22.450544  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 19:58:22.455941  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 19:58:22.460742  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 19:58:22.464881  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 19:58:22.468121  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.469066  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.472560  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 19:58:22.481607  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 19:58:22.481639  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 19:58:22.481705  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.485169  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.501611  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.537530  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.549164  586929 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 19:58:22.554234  586929 out.go:179]   - Using image docker.io/busybox:stable
	I1017 19:58:22.557373  586929 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:58:22.557396  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 19:58:22.557469  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.580588  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.582205  586929 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:58:22.582222  586929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:58:22.582291  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.596886  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.599765  586929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:58:22.611634  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.617988  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.647479  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.648344  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	W1017 19:58:22.652539  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.652582  586929 retry.go:31] will retry after 340.943544ms: ssh: handshake failed: EOF
	W1017 19:58:22.660379  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.660465  586929 retry.go:31] will retry after 335.738207ms: ssh: handshake failed: EOF
	I1017 19:58:22.672758  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	W1017 19:58:22.679369  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.679396  586929 retry.go:31] will retry after 186.146509ms: ssh: handshake failed: EOF
	I1017 19:58:22.684423  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.692947  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
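
The "dial failure (will retry)" / "will retry after ...ms" pairs above show the SSH clients being retried after a short delay when the first handshake hits EOF. A generic sketch of that retry helper (the attempt count and delay are arbitrary here; the simulated error just echoes the message from the log) is:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retry re-runs fn until it succeeds or attempts are exhausted, waiting
    // between tries - the same shape as the "will retry after ..." lines above.
    func retry(attempts int, delay time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("attempt %d failed: %v, will retry after %s\n", i+1, err, delay)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(3, 300*time.Millisecond, func() error {
            calls++
            if calls < 3 {
                return errors.New("ssh: handshake failed: EOF") // simulated dial failure
            }
            return nil
        })
        fmt.Println("result:", err)
    }
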
	I1017 19:58:23.080284  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:58:23.110978  586929 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:23.111057  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 19:58:23.121402  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:58:23.123019  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 19:58:23.123071  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 19:58:23.150417  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:58:23.184922  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 19:58:23.208961  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:23.212452  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 19:58:23.212478  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 19:58:23.218884  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 19:58:23.218947  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 19:58:23.316516  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 19:58:23.316582  586929 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 19:58:23.319658  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:58:23.341362  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 19:58:23.341438  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 19:58:23.353678  586929 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 19:58:23.353751  586929 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 19:58:23.366332  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:58:23.368216  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:58:23.369954  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 19:58:23.370016  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 19:58:23.386237  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 19:58:23.386309  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 19:58:23.475393  586929 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.199637854s)
	I1017 19:58:23.476267  586929 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1017 19:58:23.476230  586929 node_ready.go:35] waiting up to 6m0s for node "addons-948763" to be "Ready" ...
	I1017 19:58:23.511950  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 19:58:23.511972  586929 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 19:58:23.549679  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 19:58:23.549699  586929 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 19:58:23.552427  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:58:23.607297  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 19:58:23.607371  586929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 19:58:23.626276  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 19:58:23.626350  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 19:58:23.633875  586929 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:58:23.633941  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 19:58:23.642361  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:58:23.658307  586929 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:23.658376  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 19:58:23.760615  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:58:23.760705  586929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 19:58:23.762562  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 19:58:23.762619  586929 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 19:58:23.781797  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 19:58:23.781872  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 19:58:23.801328  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:58:23.863167  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:23.937053  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:58:23.937117  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 19:58:23.941299  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:58:23.970915  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 19:58:23.970982  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 19:58:23.982225  586929 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-948763" context rescaled to 1 replicas
	I1017 19:58:24.171843  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 19:58:24.171872  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 19:58:24.175044  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:58:24.295379  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 19:58:24.295406  586929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 19:58:24.616212  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 19:58:24.616286  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 19:58:24.883205  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 19:58:24.883279  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 19:58:24.988524  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:58:24.988599  586929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 19:58:25.171854  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1017 19:58:25.492521  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
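The node_ready warnings that repeat below poll the node's Ready condition until kubelet and networking report healthy. An equivalent manual probe, assuming the standard node condition layout (the jsonpath filter is illustrative, not taken from the test code):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  get node addons-948763 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'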
	I1017 19:58:26.148830  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.068503777s)
	I1017 19:58:26.183740  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.062243157s)
	I1017 19:58:26.183849  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.03340934s)
	I1017 19:58:26.183921  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.998960927s)
	I1017 19:58:26.531127  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.322104332s)
	W1017 19:58:26.531348  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:26.531389  586929 retry.go:31] will retry after 373.27079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
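The apply fails because kubectl's client-side validation reports that at least one document in ig-crd.yaml is missing its apiVersion and kind header fields. A hedged way to confirm that on the control-plane node, plus the --validate=false escape hatch the error message itself suggests (paths from the log; the grep check is illustrative):

	grep -nE '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  apply --validate=false -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml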
	I1017 19:58:26.531249  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.211518835s)
	I1017 19:58:26.531320  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.164916819s)
	I1017 19:58:26.905332  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:27.260711  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.892418558s)
	I1017 19:58:27.260992  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.708505056s)
	W1017 19:58:27.496714  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:28.298594  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.65615438s)
	I1017 19:58:28.298628  586929 addons.go:479] Verifying addon ingress=true in "addons-948763"
	I1017 19:58:28.298819  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.497414959s)
	I1017 19:58:28.298842  586929 addons.go:479] Verifying addon registry=true in "addons-948763"
	I1017 19:58:28.299340  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.436090428s)
	W1017 19:58:28.299380  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:58:28.299398  586929 retry.go:31] will retry after 127.671904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
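Here the VolumeSnapshotClass object and the CRD that defines it are sent in the same apply, so the class cannot be mapped until the CRD is Established; the forced re-apply later in the log goes through once the CRDs have registered. A sketch of the explicit ordering, assuming the upstream external-snapshotter CRD name (visible in the stdout above) and an arbitrary timeout:

	KUBECTL="sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl"
	$KUBECTL apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	$KUBECTL wait --for=condition=established --timeout=60s crd/volumesnapshotclasses.snapshot.storage.k8s.io
	$KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml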
	I1017 19:58:28.299480  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.35810973s)
	I1017 19:58:28.299495  586929 addons.go:479] Verifying addon metrics-server=true in "addons-948763"
	I1017 19:58:28.299541  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.124470517s)
	I1017 19:58:28.303009  586929 out.go:179] * Verifying registry addon...
	I1017 19:58:28.303145  586929 out.go:179] * Verifying ingress addon...
	I1017 19:58:28.304911  586929 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-948763 service yakd-dashboard -n yakd-dashboard
	
	I1017 19:58:28.307714  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 19:58:28.308421  586929 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 19:58:28.325328  586929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:58:28.325356  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:28.325640  586929 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:58:28.325657  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
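The kapi waits below poll the registry and ingress-nginx pods by label until they leave Pending. Equivalent ad-hoc checks (label selectors and namespaces from the log; the watch flag is illustrative):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n kube-system get pods -l kubernetes.io/minikube-addons=registry --watch
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl \
	  -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch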
	I1017 19:58:28.427716  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:28.726182  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.554235945s)
	I1017 19:58:28.726211  586929 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-948763"
	I1017 19:58:28.726416  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.821057124s)
	W1017 19:58:28.726444  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:28.726463  586929 retry.go:31] will retry after 207.480469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:28.730788  586929 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 19:58:28.734608  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 19:58:28.751494  586929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:58:28.751520  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:28.812986  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:28.813229  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:28.935006  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:29.238697  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:29.313015  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:29.314058  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:29.738819  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:29.812479  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:29.813821  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:58:29.981247  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:30.059292  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 19:58:30.059397  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:30.086753  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
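The test reaches the node by resolving the container's published port 22 with docker inspect and opening an SSH session as the docker user. The equivalent manual invocation, using the values the log just resolved (the ssh flags are an assumption, not taken from sshutil):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-948763
	ssh -o StrictHostKeyChecking=no \
	  -i /home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa \
	  -p 33512 docker@127.0.0.1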
	I1017 19:58:30.200886  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 19:58:30.214132  586929 addons.go:238] Setting addon gcp-auth=true in "addons-948763"
	I1017 19:58:30.214232  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:30.214728  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:30.232292  586929 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 19:58:30.232345  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:30.238956  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:30.251690  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:30.311985  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:30.312138  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:30.738110  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:30.812276  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:30.812379  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.238739  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:31.264444  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.836681738s)
	I1017 19:58:31.264588  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.329545594s)
	W1017 19:58:31.264628  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:31.264644  586929 retry.go:31] will retry after 495.562147ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:31.264679  586929 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.032367954s)
	I1017 19:58:31.267893  586929 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 19:58:31.270773  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:58:31.273610  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 19:58:31.273629  586929 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 19:58:31.286595  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 19:58:31.286661  586929 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 19:58:31.299611  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:58:31.299633  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 19:58:31.312611  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:31.313575  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.317298  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:58:31.744598  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:31.760635  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:31.830896  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:31.831396  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.845961  586929 addons.go:479] Verifying addon gcp-auth=true in "addons-948763"
	I1017 19:58:31.849235  586929 out.go:179] * Verifying gcp-auth addon...
	I1017 19:58:31.852402  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 19:58:31.864258  586929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 19:58:31.864281  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:32.238561  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:32.313317  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:32.314123  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:32.355659  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:32.481554  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	W1017 19:58:32.604265  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:32.604301  586929 retry.go:31] will retry after 961.088268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:32.739258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:32.812661  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:32.812797  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:32.855366  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:33.238389  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:33.311892  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:33.312262  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:33.356653  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:33.565681  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:33.738580  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:33.812241  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:33.813446  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:33.855349  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:34.239086  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:34.311630  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:34.313306  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:34.356317  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:34.378355  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:34.378388  586929 retry.go:31] will retry after 760.335078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:34.737815  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:34.811772  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:34.812290  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:34.856179  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:34.979814  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:35.138993  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:35.238857  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:35.313744  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:35.314234  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:35.356026  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:35.740268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:35.813486  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:35.814175  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:35.856115  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:35.957027  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:35.957059  586929 retry.go:31] will retry after 2.24202928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:36.237702  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:36.311862  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:36.311940  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:36.355840  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:36.738122  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:36.812688  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:36.815940  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:36.862926  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:36.980803  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:37.238284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:37.312849  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:37.313246  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:37.355869  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:37.738289  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:37.812431  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:37.812916  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:37.855545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:38.199688  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:38.238581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:38.312106  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:38.312139  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:38.356091  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:38.737680  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:38.813284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:38.814091  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:38.857236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:39.023411  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:39.023443  586929 retry.go:31] will retry after 2.002306756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:39.238435  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:39.312586  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:39.312920  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:39.355557  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:39.480352  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:39.737657  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:39.812135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:39.812282  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:39.856069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:40.238058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:40.312249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:40.312361  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:40.356145  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:40.738262  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:40.812939  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:40.813298  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:40.855863  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:41.025875  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:41.238025  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:41.314627  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:41.315083  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:41.356189  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:41.480919  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:41.738866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:41.813325  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:41.813949  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:58:41.842789  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:41.842820  586929 retry.go:31] will retry after 5.091243261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:41.855535  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:42.239423  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:42.312138  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:42.312362  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:42.355340  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:42.738480  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:42.811845  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:42.811939  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:42.855843  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:43.237784  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:43.311517  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:43.311789  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:43.355320  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:43.738337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:43.812081  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:43.812376  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:43.855391  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:43.980450  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:44.238279  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:44.312612  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:44.312980  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:44.356130  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:44.738832  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:44.812130  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:44.812211  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:44.856181  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:45.238712  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:45.313329  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:45.313485  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:45.356011  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:45.738670  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:45.811793  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:45.812005  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:45.855800  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:45.980628  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:46.238231  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:46.312485  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:46.312638  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:46.355599  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:46.738243  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:46.812232  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:46.812381  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:46.855377  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:46.934489  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:47.238581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:47.313284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:47.313525  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:47.355857  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:47.739343  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:58:47.764981  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:47.765015  586929 retry.go:31] will retry after 9.407508894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
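	(Editorial note, not part of the captured log.) The apply failure above is kubectl's client-side validation: every YAML document in a manifest must carry both apiVersion and kind, and the error "[apiVersion not set, kind not set]" means at least one document in ig-crd.yaml lacks them. A minimal Go sketch of that check follows; it uses gopkg.in/yaml.v3 for illustration only and is not minikube's or kubectl's own validation code.

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // validateManifest reports any document in the file that is missing
    // apiVersion or kind, mirroring kubectl's "apiVersion not set, kind not set" message.
    func validateManifest(path string) error {
    	f, err := os.Open(path)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	dec := yaml.NewDecoder(f)
    	for i := 0; ; i++ {
    		var obj map[string]interface{}
    		if err := dec.Decode(&obj); err == io.EOF {
    			return nil // all documents validated
    		} else if err != nil {
    			return err
    		}
    		if obj["apiVersion"] == nil || obj["kind"] == nil {
    			return fmt.Errorf("document %d in %s: apiVersion and kind must be set", i, path)
    		}
    	}
    }

    func main() {
    	// Path taken from the log above; adjust for local use.
    	if err := validateManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
    		fmt.Println(err)
    		os.Exit(1)
    	}
    }

	Minikube's retry loop (retry.go:31 above) simply re-runs the same kubectl apply after an increasing delay, so the error repeats until the manifest on disk is corrected.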
	I1017 19:58:47.812186  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:47.812391  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:47.856027  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:48.238250  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:48.312559  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:48.312642  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:48.356152  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:48.479926  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:48.737989  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:48.812167  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:48.812341  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:48.855803  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:49.238303  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:49.312346  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:49.312977  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:49.355829  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:49.737701  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:49.812043  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:49.812497  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:49.855347  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:50.238079  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:50.311932  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:50.312232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:50.355962  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:50.480587  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:50.737859  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:50.811948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:50.812146  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:50.855948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:51.238584  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:51.311554  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:51.311948  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:51.355674  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:51.738386  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:51.812126  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:51.812248  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:51.855712  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:52.237379  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:52.312875  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:52.312952  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:52.355807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:52.480745  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:52.737676  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:52.812763  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:52.812981  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:52.855892  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:53.238304  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:53.312382  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:53.312515  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:53.356807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:53.738199  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:53.812663  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:53.812910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:53.856123  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:54.238369  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:54.311898  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:54.312376  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:54.356279  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:54.738669  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:54.812546  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:54.812686  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:54.855752  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:54.980826  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:55.238403  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:55.312949  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:55.313167  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:55.356240  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:55.738119  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:55.811545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:55.811746  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:55.855537  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:56.238231  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:56.312493  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:56.312737  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:56.355925  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:56.737755  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:56.811631  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:56.811924  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:56.855709  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:57.173686  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:57.241877  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:57.313278  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:57.314309  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:57.355181  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:57.480310  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:57.738564  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:57.812794  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:57.813734  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:57.855510  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:58.007461  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:58.007495  586929 retry.go:31] will retry after 11.364388122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:58.238843  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:58.312818  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:58.313238  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:58.356183  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:58.737814  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:58.812866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:58.813020  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:58.855617  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:59.237698  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:59.311984  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:59.312144  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:59.355787  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:59.480924  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:59.738412  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:59.811398  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:59.811833  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:59.856475  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:00.239225  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:00.318909  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:00.320580  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:00.356704  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:00.738756  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:00.811520  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:00.811910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:00.855599  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:01.237828  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:01.312573  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:01.313035  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:01.355740  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:01.737716  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:01.812436  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:01.812518  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:01.855811  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:59:01.980449  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:59:02.242777  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:02.365244  586929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:59:02.365275  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:02.366866  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:02.407612  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:02.482348  586929 node_ready.go:49] node "addons-948763" is "Ready"
	I1017 19:59:02.482429  586929 node_ready.go:38] duration metric: took 39.005250438s for node "addons-948763" to be "Ready" ...
	I1017 19:59:02.482458  586929 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:59:02.482561  586929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:59:02.497489  586929 api_server.go:72] duration metric: took 40.555471355s to wait for apiserver process to appear ...
	I1017 19:59:02.497564  586929 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:59:02.497598  586929 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:59:02.510989  586929 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:59:02.516902  586929 api_server.go:141] control plane version: v1.34.1
	I1017 19:59:02.516983  586929 api_server.go:131] duration metric: took 19.399064ms to wait for apiserver health ...
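	(Editorial note, not part of the captured log.) The healthz wait logged above is an HTTPS GET against the apiserver endpoint shown in the log, succeeding once it returns 200 with body "ok". A hedged Go sketch of such a probe (TLS verification is skipped here only for brevity; the real client authenticates with the cluster CA and credentials):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // checkHealthz GETs the apiserver /healthz endpoint and reports whether it
    // answered HTTP 200 with the literal body "ok", as seen in the log above.
    func checkHealthz(url string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Sketch only: skip verification; production code should trust the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(url)
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz returned %d: %q", resp.StatusCode, body)
    	}
    	return nil
    }

    func main() {
    	// Endpoint copied from the log above.
    	fmt.Println(checkHealthz("https://192.168.49.2:8443/healthz"))
    }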
	I1017 19:59:02.517006  586929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:59:02.537015  586929 system_pods.go:59] 19 kube-system pods found
	I1017 19:59:02.537108  586929 system_pods.go:61] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.537133  586929 system_pods.go:61] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending
	I1017 19:59:02.537193  586929 system_pods.go:61] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.537217  586929 system_pods.go:61] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.537249  586929 system_pods.go:61] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.537280  586929 system_pods.go:61] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.537305  586929 system_pods.go:61] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.537324  586929 system_pods.go:61] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.537358  586929 system_pods.go:61] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.537380  586929 system_pods.go:61] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.537401  586929 system_pods.go:61] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.537442  586929 system_pods.go:61] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.537477  586929 system_pods.go:61] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending
	I1017 19:59:02.537497  586929 system_pods.go:61] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending
	I1017 19:59:02.537534  586929 system_pods.go:61] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.537559  586929 system_pods.go:61] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending
	I1017 19:59:02.537583  586929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.537619  586929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending
	I1017 19:59:02.537644  586929 system_pods.go:61] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.537665  586929 system_pods.go:74] duration metric: took 20.638589ms to wait for pod list to return data ...
	I1017 19:59:02.537704  586929 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:59:02.582739  586929 default_sa.go:45] found service account: "default"
	I1017 19:59:02.582815  586929 default_sa.go:55] duration metric: took 45.086718ms for default service account to be created ...
	I1017 19:59:02.582840  586929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:59:02.637256  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:02.637390  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.637415  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending
	I1017 19:59:02.637468  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.637494  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.637534  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.637557  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.637577  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.637613  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.637639  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.637662  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.637697  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.637725  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.637747  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending
	I1017 19:59:02.637793  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending
	I1017 19:59:02.637822  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.637844  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending
	I1017 19:59:02.637889  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.637910  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending
	I1017 19:59:02.637945  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.637982  586929 retry.go:31] will retry after 271.325382ms: missing components: kube-dns
	I1017 19:59:02.744002  586929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:59:02.744070  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:02.813491  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:02.813918  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:02.857997  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:02.934861  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:02.934907  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.934935  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:02.934950  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.934956  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.934960  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.934983  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.934996  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.935001  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.935020  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.935031  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.935036  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.935056  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.935069  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:02.935076  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:02.935173  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.935189  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:02.935197  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.935222  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.935233  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.935264  586929 retry.go:31] will retry after 375.666875ms: missing components: kube-dns
	I1017 19:59:03.241332  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:03.352532  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:03.352910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:03.360779  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:03.360818  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:03.360836  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:03.360863  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:03.360876  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:03.360881  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:03.360887  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:03.360895  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:03.360899  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:03.360925  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:03.360937  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:03.360942  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:03.360957  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:03.360974  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:03.360981  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:03.361007  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:03.361019  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:03.361026  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.361037  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.361045  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:59:03.361078  586929 retry.go:31] will retry after 480.266829ms: missing components: kube-dns
	I1017 19:59:03.445079  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:03.739402  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:03.813331  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:03.813410  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:03.846588  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:03.846624  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:03.846660  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:03.846673  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:03.846681  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:03.846690  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:03.846696  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:03.846715  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:03.846726  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:03.846733  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:03.846752  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:03.846763  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:03.846770  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:03.846790  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:03.846803  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:03.846810  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:03.846836  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:03.846844  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.846867  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.846880  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:59:03.846897  586929 retry.go:31] will retry after 571.985018ms: missing components: kube-dns
	I1017 19:59:03.856399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:04.238332  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:04.339826  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:04.339902  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:04.440267  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:04.441028  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:04.441055  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Running
	I1017 19:59:04.441092  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:04.441107  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:04.441117  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:04.441128  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:04.441133  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:04.441138  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:04.441171  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:04.441187  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:04.441192  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:04.441197  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:04.441203  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:04.441215  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:04.441224  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:04.441260  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:04.441271  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:04.441286  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:04.441294  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:04.441302  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Running
	I1017 19:59:04.441311  586929 system_pods.go:126] duration metric: took 1.858452185s to wait for k8s-apps to be running ...
	I1017 19:59:04.441336  586929 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:59:04.441418  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:59:04.456100  586929 system_svc.go:56] duration metric: took 14.75464ms WaitForService to wait for kubelet
	I1017 19:59:04.456142  586929 kubeadm.go:586] duration metric: took 42.514128593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:59:04.456161  586929 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:59:04.459033  586929 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:59:04.459066  586929 node_conditions.go:123] node cpu capacity is 2
	I1017 19:59:04.459088  586929 node_conditions.go:105] duration metric: took 2.905092ms to run NodePressure ...
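	(Editorial note, not part of the captured log.) The NodePressure step above reads the node object's capacity (ephemeral storage 203034800Ki, 2 CPUs) and its conditions. A hedged client-go sketch that retrieves the same fields; the kubeconfig path and node name are taken from the log but would normally come from the test's own configuration:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // printNodeCapacity fetches a node and prints the capacity values and Ready
    // condition that the verifier above inspects.
    func printNodeCapacity(kubeconfig, nodeName string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Println("ephemeral-storage capacity:", node.Status.Capacity.StorageEphemeral().String())
    	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			fmt.Println("Ready condition:", c.Status)
    		}
    	}
    	return nil
    }

    func main() {
    	if err := printNodeCapacity("/var/lib/minikube/kubeconfig", "addons-948763"); err != nil {
    		fmt.Println(err)
    	}
    }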
	I1017 19:59:04.459207  586929 start.go:241] waiting for startup goroutines ...
	I1017 19:59:04.738432  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:04.812946  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:04.813080  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:04.856012  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:05.238766  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:05.313036  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:05.313397  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:05.355844  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:05.738619  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:05.812358  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:05.812702  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:05.855535  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:06.238660  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:06.314138  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:06.314610  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:06.356685  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:06.737695  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:06.812587  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:06.814506  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:06.855436  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:07.238320  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:07.312722  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:07.312893  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:07.356122  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:07.739014  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:07.839475  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:07.839735  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:07.855834  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:08.238638  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:08.312107  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:08.312583  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:08.355935  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:08.738381  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:08.812430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:08.812579  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:08.855380  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:09.237548  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:09.312371  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:09.312473  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:09.355146  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:09.372474  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:09.739069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:09.838569  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:09.838690  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:09.939289  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:10.238058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:10.314552  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:10.315002  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:10.385258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:10.419186  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.046627359s)
	W1017 19:59:10.419217  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:10.419238  586929 retry.go:31] will retry after 11.526262814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:10.738147  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:10.812893  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:10.813005  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:10.855716  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:11.238364  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:11.313066  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:11.313154  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:11.356638  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:11.741318  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:11.814098  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:11.817835  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:11.856147  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:12.238981  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:12.313581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:12.313772  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:12.356280  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:12.738524  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:12.818606  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:12.820162  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:12.856214  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:13.239792  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:13.313411  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:13.313764  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:13.357816  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:13.739069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:13.828407  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:13.828867  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:13.857584  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:14.238452  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:14.313233  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:14.313547  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:14.355556  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:14.738132  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:14.812278  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:14.812439  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:14.855583  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:15.238306  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:15.312884  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:15.312995  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:15.356431  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:15.738894  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:15.813025  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:15.813149  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:15.856040  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:16.238613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:16.312590  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:16.313259  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:16.356345  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:16.738477  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:16.812449  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:16.812621  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:16.855331  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:17.237754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:17.312574  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:17.312789  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:17.355687  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:17.739248  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:17.814314  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:17.814926  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:17.856880  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:18.238419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:18.313618  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:18.315474  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:18.368124  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:18.739648  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:18.812968  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:18.813327  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:18.856201  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:19.238779  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:19.312921  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:19.313253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:19.356616  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:19.738890  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:19.812534  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:19.813240  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:19.855711  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:20.238529  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:20.312248  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:20.312814  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:20.355442  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:20.738266  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:20.812558  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:20.812820  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:20.856099  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.239419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:21.313384  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:21.313538  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:21.358372  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.741579  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:21.816807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:21.817232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:21.856786  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.946157  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:22.239235  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:22.314661  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:22.316239  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:22.363077  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:22.739177  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:22.814087  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:22.814924  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:22.856090  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:23.238639  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:23.313775  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:23.315073  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:23.350027  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.403726603s)
	W1017 19:59:23.350105  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:23.350140  586929 retry.go:31] will retry after 31.29214394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:23.356561  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:23.739887  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:23.813521  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:23.813856  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:23.856380  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:24.238550  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:24.312508  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:24.313135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:24.356071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:24.738067  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:24.812498  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:24.813232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:24.855858  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:25.238640  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:25.312249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:25.312444  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:25.355312  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:25.738602  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:25.812525  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:25.813218  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:25.856318  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:26.238268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:26.312662  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:26.312988  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:26.356586  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:26.738253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:26.812926  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:26.813041  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:26.865582  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:27.238253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:27.313773  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:27.313904  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:27.355981  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:27.739029  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:27.813102  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:27.813236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:27.856197  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:28.240687  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:28.339246  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:28.339837  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:28.355212  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:28.739018  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:28.814191  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:28.814522  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:28.855488  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:29.238220  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:29.312544  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:29.313135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:29.356124  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:29.738990  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:29.812218  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:29.818461  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:29.855419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:30.239292  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:30.312933  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:30.314117  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:30.355872  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:30.739058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:30.812631  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:30.812820  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:30.855934  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:31.238696  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:31.312660  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:31.312904  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:31.355973  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:31.739226  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:31.814057  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:31.814220  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:31.856206  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:32.252136  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:32.313228  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:32.313367  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:32.355822  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:32.738732  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:32.813418  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:32.814072  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:32.856060  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:33.240524  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:33.314426  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:33.314975  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:33.356877  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:33.767467  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:33.812950  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:33.813168  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:33.856227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:34.238995  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:34.340261  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:34.340409  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:34.440305  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:34.739479  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:34.812843  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:34.813215  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:34.856399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:35.239154  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:35.318083  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:35.318426  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:35.356454  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:35.738758  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:35.814268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:35.814903  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:35.856384  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:36.238227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:36.314350  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:36.314780  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:36.356237  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:36.737509  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:36.812389  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:36.812733  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:36.855815  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:37.240596  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:37.314540  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:37.314970  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:37.356415  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:37.738405  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:37.812751  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:37.813486  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:37.856012  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:38.238822  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:38.313446  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:38.313881  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:38.356522  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:38.738703  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:38.811767  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:38.812413  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:38.855884  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:39.238671  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:39.312436  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:39.312704  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:39.355319  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:39.738490  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:39.813123  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:39.813316  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:39.855588  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:40.237755  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:40.312768  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:40.312903  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:40.355865  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:40.738491  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:40.812718  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:40.814079  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:40.856548  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:41.238615  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:41.313320  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:41.313660  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:41.356226  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:41.739496  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:41.812776  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:41.812823  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:41.855727  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:42.239010  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:42.314278  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:42.314861  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:42.355905  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:42.738267  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:42.812270  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:42.814073  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:42.856227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:43.238430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:43.313649  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:43.314143  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:43.413158  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:43.740339  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:43.840944  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:43.841096  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:43.856280  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:44.239095  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:44.312549  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:44.312726  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:44.356415  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:44.739467  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:44.839866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:44.840046  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:44.856168  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:45.241190  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:45.320286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:45.319960  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:45.357049  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:45.738195  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:45.813025  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:45.813182  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:45.860286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:46.238747  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:46.312669  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:46.313353  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:46.356271  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:46.737988  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:46.813863  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:46.814260  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:46.856069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:47.241236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:47.341754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:47.341862  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:47.355581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:47.739182  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:47.840191  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:47.840329  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:47.856547  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:48.238027  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:48.312048  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:48.312551  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:48.355062  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:48.738430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:48.812531  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:48.812948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:48.855887  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:49.239856  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:49.314144  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:49.314277  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:49.356672  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:49.739664  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:49.841587  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:49.841967  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:49.856304  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:50.238020  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:50.312150  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:50.312349  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:50.355854  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:50.738567  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:50.812356  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:50.813240  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:50.856085  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:51.239005  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:51.313548  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:51.313998  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:51.414558  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:51.738677  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:51.817287  586929 kapi.go:107] duration metric: took 1m23.509571343s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 19:59:51.838722  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:51.855499  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:52.238155  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:52.311993  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:52.355785  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:52.738519  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:52.812520  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:52.856286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:53.238379  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:53.312561  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:53.355277  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:53.740731  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:53.812011  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:53.856185  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:54.238790  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:54.312102  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:54.356139  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:54.642602  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:54.738545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:54.812882  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:54.856326  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.238192  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:55.312615  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:55.355798  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.739399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:55.812019  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:55.856194  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.925878  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.283237896s)
	W1017 19:59:55.925938  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:55.925964  586929 retry.go:31] will retry after 22.626956912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:56.239237  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:56.312193  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:56.356298  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:56.738665  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:56.811844  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:56.855998  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:57.239221  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:57.312370  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:57.355374  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:57.738613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:57.813008  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:57.856928  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:58.239551  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:58.314180  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:58.357709  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:58.748350  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:58.813096  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:58.856405  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:59.240337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:59.316716  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:59.357201  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:59.810106  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:59.812462  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:59.856681  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:00.266746  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:00.334094  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:00.386071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:00.801249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:00.844017  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:00.891883  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:01.251666  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:01.340855  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:01.400754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:01.755851  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:01.817865  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:01.905611  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:02.263029  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:02.323975  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:02.379391  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:02.739514  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:02.814725  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:02.856490  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:03.238401  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:03.312925  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:03.356249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:03.738595  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:03.812312  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:03.856545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:04.239931  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:04.312578  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:04.355551  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:04.738649  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:04.812381  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:04.855321  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:05.239445  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:05.313391  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:05.356388  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:05.738906  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:05.813280  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:05.856287  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:06.238546  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:06.311859  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:06.355953  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:06.739257  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:06.839251  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:06.858832  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:07.238629  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:07.312824  586929 kapi.go:107] duration metric: took 1m39.004396914s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 20:00:07.356337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:07.738270  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:07.855695  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:08.244912  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:08.356241  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:08.739196  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:08.859190  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:09.238770  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:09.355401  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:09.740008  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:09.856613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:10.239097  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:10.356665  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:10.739369  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:10.856071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:11.239747  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:11.356258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:11.738620  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:11.856403  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:12.238133  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:12.355286  586929 kapi.go:107] duration metric: took 1m40.50288042s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 20:00:12.358582  586929 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-948763 cluster.
	I1017 20:00:12.361615  586929 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 20:00:12.364596  586929 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1017 20:00:12.738616  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:13.240736  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:13.739021  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:14.238382  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:14.738557  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:15.238421  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:15.739714  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:16.238638  586929 kapi.go:107] duration metric: took 1m47.504028468s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 20:00:18.553196  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 20:00:19.443501  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:19.443598  586929 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1017 20:00:19.446819  586929 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, registry-creds, default-storageclass, amd-gpu-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1017 20:00:19.449781  586929 addons.go:514] duration metric: took 1m57.507210179s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner registry-creds default-storageclass amd-gpu-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1017 20:00:19.449834  586929 start.go:246] waiting for cluster config update ...
	I1017 20:00:19.449857  586929 start.go:255] writing updated cluster config ...
	I1017 20:00:19.450170  586929 ssh_runner.go:195] Run: rm -f paused
	I1017 20:00:19.453859  586929 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:00:19.457994  586929 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f4b6j" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.464654  586929 pod_ready.go:94] pod "coredns-66bc5c9577-f4b6j" is "Ready"
	I1017 20:00:19.464687  586929 pod_ready.go:86] duration metric: took 6.662003ms for pod "coredns-66bc5c9577-f4b6j" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.467194  586929 pod_ready.go:83] waiting for pod "etcd-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.472647  586929 pod_ready.go:94] pod "etcd-addons-948763" is "Ready"
	I1017 20:00:19.472672  586929 pod_ready.go:86] duration metric: took 5.443417ms for pod "etcd-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.475382  586929 pod_ready.go:83] waiting for pod "kube-apiserver-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.480614  586929 pod_ready.go:94] pod "kube-apiserver-addons-948763" is "Ready"
	I1017 20:00:19.480642  586929 pod_ready.go:86] duration metric: took 5.232633ms for pod "kube-apiserver-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.483234  586929 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.857712  586929 pod_ready.go:94] pod "kube-controller-manager-addons-948763" is "Ready"
	I1017 20:00:19.857743  586929 pod_ready.go:86] duration metric: took 374.434418ms for pod "kube-controller-manager-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.058614  586929 pod_ready.go:83] waiting for pod "kube-proxy-qtcs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.457767  586929 pod_ready.go:94] pod "kube-proxy-qtcs2" is "Ready"
	I1017 20:00:20.457796  586929 pod_ready.go:86] duration metric: took 399.155156ms for pod "kube-proxy-qtcs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.661696  586929 pod_ready.go:83] waiting for pod "kube-scheduler-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:21.057956  586929 pod_ready.go:94] pod "kube-scheduler-addons-948763" is "Ready"
	I1017 20:00:21.057981  586929 pod_ready.go:86] duration metric: took 396.213255ms for pod "kube-scheduler-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:21.057993  586929 pod_ready.go:40] duration metric: took 1.604099186s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:00:21.114830  586929 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:00:21.118132  586929 out.go:179] * Done! kubectl is now configured to use "addons-948763" cluster and "default" namespace by default
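
Note on the failure above: the repeated "apply failed, will retry" entries and the final "Enabling 'inspektor-gadget' returned an error" all stem from the same cause. kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because at least one YAML document in that file is missing the mandatory apiVersion and kind fields. For reference, a well-formed CustomResourceDefinition manifest begins with exactly the two fields the validator reports as absent; the sketch below is illustrative only and is not the actual contents of ig-crd.yaml:

    apiVersion: apiextensions.k8s.io/v1      # the field reported as "apiVersion not set"
    kind: CustomResourceDefinition           # the field reported as "kind not set"
    metadata:
      name: traces.gadget.kinvolk.io         # illustrative CRD name, not from this run

Passing --validate=false, as the error text suggests, only skips this client-side check; it does not make the manifest well-formed.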
	
	
	==> CRI-O <==
	Oct 17 20:03:15 addons-948763 crio[833]: time="2025-10-17T20:03:15.733503827Z" level=info msg="Removed container e10870e12122b9e8c537a73d9fe84318d3c23b01db3442e0afb7a70c489aa9e1: kube-system/registry-creds-764b6fb674-w5f6g/registry-creds" id=15475528-c85d-45dd-9e63-e31ed416deb3 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.286174586Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-h6n8p/POD" id=656ac990-d75f-49ce-91eb-35a96a49bf3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.286236454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.297952844Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h6n8p Namespace:default ID:54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd UID:dd67d5ac-2965-4043-8426-d040031231db NetNS:/var/run/netns/477d6fc0-ed86-40ee-95ae-164e9017bc55 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001a57400}] Aliases:map[]}"
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.297999302Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-h6n8p to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.314880057Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-h6n8p Namespace:default ID:54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd UID:dd67d5ac-2965-4043-8426-d040031231db NetNS:/var/run/netns/477d6fc0-ed86-40ee-95ae-164e9017bc55 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001a57400}] Aliases:map[]}"
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.315030147Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-h6n8p for CNI network kindnet (type=ptp)"
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.324325835Z" level=info msg="Ran pod sandbox 54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd with infra container: default/hello-world-app-5d498dc89-h6n8p/POD" id=656ac990-d75f-49ce-91eb-35a96a49bf3a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.325947979Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=5fff5f7a-c5b7-48e5-919f-d53354898704 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.326218137Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=5fff5f7a-c5b7-48e5-919f-d53354898704 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.326341248Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=5fff5f7a-c5b7-48e5-919f-d53354898704 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.327433104Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=fa667bda-e954-41d0-916b-a83339510cc0 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:03:21 addons-948763 crio[833]: time="2025-10-17T20:03:21.328989745Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.203022702Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=fa667bda-e954-41d0-916b-a83339510cc0 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.203806155Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cdfdb45d-d530-4ab4-9eb8-d6f19d3ae8d1 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.211195322Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a10608f9-51ee-4491-a7c4-b5995cdb2f4f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.226489781Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-h6n8p/hello-world-app" id=eafd0144-6a9f-4469-9c0e-4ed830e7d2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.227757902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.26859453Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.268811945Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/959649f5dbaf094dff3b60bcfc37d370e161c6c6673ea4f9c4a8dcca4c0e118f/merged/etc/passwd: no such file or directory"
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.268931429Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/959649f5dbaf094dff3b60bcfc37d370e161c6c6673ea4f9c4a8dcca4c0e118f/merged/etc/group: no such file or directory"
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.269247397Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.303037756Z" level=info msg="Created container a5c24108678e9da7f69a837fe1d4900947aa65178988cbe125d10d6cd9102757: default/hello-world-app-5d498dc89-h6n8p/hello-world-app" id=eafd0144-6a9f-4469-9c0e-4ed830e7d2f2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.308725089Z" level=info msg="Starting container: a5c24108678e9da7f69a837fe1d4900947aa65178988cbe125d10d6cd9102757" id=159d7f5d-3d25-49a9-bfef-a68ea22ef200 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:03:22 addons-948763 crio[833]: time="2025-10-17T20:03:22.311843615Z" level=info msg="Started container" PID=7212 containerID=a5c24108678e9da7f69a837fe1d4900947aa65178988cbe125d10d6cd9102757 description=default/hello-world-app-5d498dc89-h6n8p/hello-world-app id=159d7f5d-3d25-49a9-bfef-a68ea22ef200 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd
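
The CRI-O entries above trace the full start-up path for the hello-world-app pod: sandbox creation, CNI attachment to the kindnet network, the pull of docker.io/kicbase/echo-server:1.0, then container creation and start. The same state can be checked on the node with standard crictl subcommands; a minimal sketch, reusing the pod name and container ID prefix from the log above:

    sudo crictl pods --name hello-world-app-5d498dc89-h6n8p   # sandbox from "Ran pod sandbox"
    sudo crictl images | grep echo-server                     # image pulled by PullImage
    sudo crictl ps --name hello-world-app                     # running container
    sudo crictl inspect a5c24108678e9                         # ID prefix from "Started container"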
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	a5c24108678e9       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   54664431205c6       hello-world-app-5d498dc89-h6n8p             default
	78cd49d5b52c0       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             8 seconds ago            Exited              registry-creds                           1                   e0157d88e582f       registry-creds-764b6fb674-w5f6g             kube-system
	5525a29c26143       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0                                              2 minutes ago            Running             nginx                                    0                   846b0d8a0fd36       nginx                                       default
	0c243b948deea       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   d9a6bc325601e       busybox                                     default
	62f5238d05d6d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	d36d10bdfcbed       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	c53a40a0c62b8       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	d595a7efff522       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   bfefd79051e1f       gcp-auth-78565c9fb4-k9ct2                   gcp-auth
	bcc361491949c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	ca753fcea18a0       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago            Running             controller                               0                   dfc9ff6468ac1       ingress-nginx-controller-675c5ddd98-xc8ch   ingress-nginx
	8aa75ff35c2a2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago            Running             gadget                                   0                   940634f7b50cf       gadget-dd22p                                gadget
	ef912ed7d00d7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	b9d12f5fbfa6c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   b719d74efe38f       csi-hostpath-resizer-0                      kube-system
	9bc6a4da19b29       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   dee4186013164       registry-proxy-5jjqn                        kube-system
	840a4de138b52       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   09377d604db1a       registry-6b586f9694-pj8zh                   kube-system
	ff6d474d02691       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              patch                                    0                   7b781615d9962       ingress-nginx-admission-patch-kvpgp         ingress-nginx
	ec5f8e2b041c5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   373a7efef9a5e       local-path-provisioner-648f6765c9-6fbwx     local-path-storage
	63ee152cb5bc0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   eb43cc5255279       snapshot-controller-7d9fbc56b8-pp66v        kube-system
	16b93abda1117       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   b1914199607b0       nvidia-device-plugin-daemonset-7vw8v        kube-system
	44c18bb7ee583       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   e13cb9e72a373       snapshot-controller-7d9fbc56b8-89gvs        kube-system
	21d5cbb832a96       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	65f41a91bc9c3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago            Exited              create                                   0                   61c4a778a4f3a       ingress-nginx-admission-create-z4cxf        ingress-nginx
	eb8c078b44245       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   4f400b83a557b       yakd-dashboard-5ff678cb9-rg2kq              yakd-dashboard
	64a714f55a7cc       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               4 minutes ago            Running             minikube-ingress-dns                     0                   7844c7f9e5a41       kube-ingress-dns-minikube                   kube-system
	43e9813ed2b67       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   170736af7f9fc       metrics-server-85b7d694d7-h9xx7             kube-system
	f09c7137954ed       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               4 minutes ago            Running             cloud-spanner-emulator                   0                   e2a5618919ceb       cloud-spanner-emulator-86bd5cbb97-mhdbb     default
	6573cd1e55f00       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             4 minutes ago            Running             csi-attacher                             0                   63473a0ba4a1d       csi-hostpath-attacher-0                     kube-system
	5a104d2bf0866       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   78354a824348c       storage-provisioner                         kube-system
	ae878718d77b6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   6e8efb9135264       coredns-66bc5c9577-f4b6j                    kube-system
	c2aef7690fa71       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             5 minutes ago            Running             kindnet-cni                              0                   c4dbec0c991dc       kindnet-kr7qd                               kube-system
	1d1813b82c050       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             5 minutes ago            Running             kube-proxy                               0                   7dca2e12231ba       kube-proxy-qtcs2                            kube-system
	db1d8cdc9b83a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago            Running             kube-scheduler                           0                   1283835cc9a0d       kube-scheduler-addons-948763                kube-system
	e643c6e152656       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago            Running             kube-apiserver                           0                   64e630e41e389       kube-apiserver-addons-948763                kube-system
	fc4fe4ea2862e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago            Running             etcd                                     0                   7be534eaf84aa       etcd-addons-948763                          kube-system
	cf9507fdd5ef1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago            Running             kube-controller-manager                  0                   18ec12507bef3       kube-controller-manager-addons-948763       kube-system
	
	
	==> coredns [ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220] <==
	[INFO] 10.244.0.16:56351 - 47886 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001711237s
	[INFO] 10.244.0.16:56351 - 45493 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114193s
	[INFO] 10.244.0.16:56351 - 31310 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000082701s
	[INFO] 10.244.0.16:40334 - 65249 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168962s
	[INFO] 10.244.0.16:40334 - 64788 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097667s
	[INFO] 10.244.0.16:41485 - 6197 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095001s
	[INFO] 10.244.0.16:41485 - 5975 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073412s
	[INFO] 10.244.0.16:46475 - 39076 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099997s
	[INFO] 10.244.0.16:46475 - 38896 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087049s
	[INFO] 10.244.0.16:55022 - 26664 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001622752s
	[INFO] 10.244.0.16:55022 - 26483 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001362128s
	[INFO] 10.244.0.16:39227 - 48049 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104707s
	[INFO] 10.244.0.16:39227 - 47637 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148048s
	[INFO] 10.244.0.21:49388 - 27648 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159469s
	[INFO] 10.244.0.21:49791 - 7471 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080428s
	[INFO] 10.244.0.21:55361 - 2130 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145487s
	[INFO] 10.244.0.21:33493 - 4883 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104789s
	[INFO] 10.244.0.21:55009 - 53337 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093031s
	[INFO] 10.244.0.21:42170 - 64721 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062146s
	[INFO] 10.244.0.21:41291 - 33418 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002650376s
	[INFO] 10.244.0.21:56465 - 20604 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002579466s
	[INFO] 10.244.0.21:34241 - 21393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001646078s
	[INFO] 10.244.0.21:60921 - 21298 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006457084s
	[INFO] 10.244.0.23:54043 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000194021s
	[INFO] 10.244.0.23:50916 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00012764s
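
The NXDOMAIN/NOERROR pairs above are ordinary Kubernetes DNS search-path expansion, not lookup failures: with the usual ndots:5 resolver options, each name is first tried against every search suffix (pod namespace, svc.cluster.local, cluster.local, the node's us-east-2.compute.internal domain) and only the final fully qualified form answers NOERROR. A typical kubelet-generated pod resolv.conf under these defaults looks roughly like the following (illustrative, not captured from this cluster):

    nameserver 10.96.0.10
    search kube-system.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
    options ndots:5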
	
	
	==> describe nodes <==
	Name:               addons-948763
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-948763
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=addons-948763
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_58_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-948763
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-948763"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-948763
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:03:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:02:32 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:02:32 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:02:32 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:02:32 +0000   Fri, 17 Oct 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-948763
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                559e7844-22fd-4610-b77e-f56f5f74096c
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-86bd5cbb97-mhdbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  default                     hello-world-app-5d498dc89-h6n8p              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-dd22p                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  gcp-auth                    gcp-auth-78565c9fb4-k9ct2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xc8ch    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m55s
	  kube-system                 coredns-66bc5c9577-f4b6j                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     5m2s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 csi-hostpathplugin-7b6l4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 etcd-addons-948763                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m7s
	  kube-system                 kindnet-kr7qd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      5m2s
	  kube-system                 kube-apiserver-addons-948763                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 kube-controller-manager-addons-948763        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  kube-system                 kube-proxy-qtcs2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 kube-scheduler-addons-948763                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m7s
	  kube-system                 metrics-server-85b7d694d7-h9xx7              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m57s
	  kube-system                 nvidia-device-plugin-daemonset-7vw8v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 registry-6b586f9694-pj8zh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 registry-creds-764b6fb674-w5f6g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 registry-proxy-5jjqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 snapshot-controller-7d9fbc56b8-89gvs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-pp66v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m57s
	  local-path-storage          local-path-provisioner-648f6765c9-6fbwx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rg2kq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 5m     kube-proxy       
	  Normal   Starting                 5m7s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m7s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m7s   kubelet          Node addons-948763 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m7s   kubelet          Node addons-948763 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m7s   kubelet          Node addons-948763 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           5m3s   node-controller  Node addons-948763 event: Registered Node addons-948763 in Controller
	  Normal   NodeReady                4m21s  kubelet          Node addons-948763 status is now: NodeReady
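
The percentages in the Allocated resources table above are the summed requests (or limits) divided by the node's allocatable capacity, truncated to whole percent. Worked out from the figures shown:

    cpu requests:    1050m / 2000m (2 CPUs)        = 52.5%  -> shown as 52%
    memory requests: 638Mi / 8022304Ki (≈7834Mi)   ≈ 8.1%   -> shown as 8%
    memory limits:   476Mi / 7834Mi                ≈ 6.1%   -> shown as 6%
    cpu limits:      100m / 2000m                  = 5%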
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2] <==
	{"level":"warn","ts":"2025-10-17T19:58:11.856505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.888353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.907528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.949185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.981071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.997654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.028150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.052057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.090707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.105162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.140033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.156172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.183344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.227456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.248388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.275733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.311713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.339240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.497337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:28.863635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:28.882234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.546650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.563923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.614510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.629734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38892","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d595a7efff522120ff8e5dafc7c1075abd3e649ad90a9ffd29fd55e41beacfed] <==
	2025/10/17 20:00:11 GCP Auth Webhook started!
	2025/10/17 20:00:21 Ready to marshal response ...
	2025/10/17 20:00:21 Ready to write response ...
	2025/10/17 20:00:22 Ready to marshal response ...
	2025/10/17 20:00:22 Ready to write response ...
	2025/10/17 20:00:22 Ready to marshal response ...
	2025/10/17 20:00:22 Ready to write response ...
	2025/10/17 20:00:43 Ready to marshal response ...
	2025/10/17 20:00:43 Ready to write response ...
	2025/10/17 20:00:47 Ready to marshal response ...
	2025/10/17 20:00:47 Ready to write response ...
	2025/10/17 20:00:59 Ready to marshal response ...
	2025/10/17 20:00:59 Ready to write response ...
	2025/10/17 20:01:06 Ready to marshal response ...
	2025/10/17 20:01:06 Ready to write response ...
	2025/10/17 20:01:17 Ready to marshal response ...
	2025/10/17 20:01:17 Ready to write response ...
	2025/10/17 20:01:17 Ready to marshal response ...
	2025/10/17 20:01:17 Ready to write response ...
	2025/10/17 20:01:28 Ready to marshal response ...
	2025/10/17 20:01:28 Ready to write response ...
	2025/10/17 20:03:20 Ready to marshal response ...
	2025/10/17 20:03:20 Ready to write response ...
	
	
	==> kernel <==
	 20:03:23 up  2:45,  0 user,  load average: 0.68, 2.38, 3.04
	Linux addons-948763 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883] <==
	I1017 20:01:22.121310       1 main.go:301] handling current node
	I1017 20:01:32.120908       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:01:32.120962       1 main.go:301] handling current node
	I1017 20:01:42.120816       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:01:42.120945       1 main.go:301] handling current node
	I1017 20:01:52.120618       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:01:52.120651       1 main.go:301] handling current node
	I1017 20:02:02.120411       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:02.120448       1 main.go:301] handling current node
	I1017 20:02:12.121083       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:12.121116       1 main.go:301] handling current node
	I1017 20:02:22.121114       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:22.121146       1 main.go:301] handling current node
	I1017 20:02:32.120704       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:32.120738       1 main.go:301] handling current node
	I1017 20:02:42.120621       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:42.120661       1 main.go:301] handling current node
	I1017 20:02:52.120693       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:02:52.120728       1 main.go:301] handling current node
	I1017 20:03:02.120711       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:03:02.120745       1 main.go:301] handling current node
	I1017 20:03:12.120535       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:03:12.120569       1 main.go:301] handling current node
	I1017 20:03:22.121265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:03:22.121301       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592] <==
	E1017 19:59:02.276334       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:02.277165       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.110.121:443: connect: connection refused
	E1017 19:59:02.277201       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:02.329580       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.110.121:443: connect: connection refused
	E1017 19:59:02.329624       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:13.455511       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:59:13.455596       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 19:59:13.459848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.460536       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.465760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.487344       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.529066       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.610155       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.688429       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	I1017 19:59:13.908783       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 20:00:31.454184       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55892: use of closed network connection
	E1017 20:00:31.683446       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55924: use of closed network connection
	E1017 20:00:31.813148       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55944: use of closed network connection
	I1017 20:00:59.782298       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1017 20:01:00.363550       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.143.69"}
	I1017 20:01:00.821839       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1017 20:03:21.166046       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.181.84"}
	
	
	==> kube-controller-manager [cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7] <==
	I1017 19:58:20.579447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:58:20.579550       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:58:20.579587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:58:20.579689       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:58:20.581166       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:58:20.581274       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 19:58:20.582562       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:58:20.582706       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:58:20.583184       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:58:20.583221       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:58:20.583192       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:58:20.583271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:58:20.587087       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:58:20.589517       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:58:20.593046       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:58:20.594796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:58:26.858304       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 19:58:50.539615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:58:50.539780       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 19:58:50.539833       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 19:58:50.602473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 19:58:50.606863       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 19:58:50.640951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:58:50.707551       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:59:05.542589       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b] <==
	I1017 19:58:21.877711       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:58:21.968718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:58:22.069803       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:58:22.069853       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:58:22.069933       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:58:22.174457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:58:22.174568       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:58:22.187346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:58:22.187723       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:58:22.187738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:58:22.206810       1 config.go:200] "Starting service config controller"
	I1017 19:58:22.206831       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:58:22.206854       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:58:22.206858       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:58:22.206868       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:58:22.206872       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:58:22.223668       1 config.go:309] "Starting node config controller"
	I1017 19:58:22.223690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:58:22.223706       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:58:22.309025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:58:22.309061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:58:22.309346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7] <==
	I1017 19:58:14.769751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:58:14.773074       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:58:14.773241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:58:14.773350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:58:14.773411       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:58:14.783841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:58:14.783947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:58:14.784000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:58:14.784054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:58:14.784099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:58:14.784145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:58:14.784180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:58:14.784274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:58:14.787654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:58:14.787731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:58:14.788082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:58:14.788198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:58:14.788336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:58:14.788383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:58:14.788453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:58:14.788682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:58:14.788789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:58:14.788868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:58:14.788915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1017 19:58:15.673948       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:01:30 addons-948763 kubelet[1282]: I1017 20:01:30.505994    1282 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f8be49f3-4261-49fb-a532-d84069a75202-gcp-creds\") on node \"addons-948763\" DevicePath \"\""
	Oct 17 20:01:30 addons-948763 kubelet[1282]: I1017 20:01:30.506005    1282 reconciler_common.go:299] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/f8be49f3-4261-49fb-a532-d84069a75202-data\") on node \"addons-948763\" DevicePath \"\""
	Oct 17 20:01:30 addons-948763 kubelet[1282]: I1017 20:01:30.506018    1282 reconciler_common.go:299] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/f8be49f3-4261-49fb-a532-d84069a75202-script\") on node \"addons-948763\" DevicePath \"\""
	Oct 17 20:01:31 addons-948763 kubelet[1282]: I1017 20:01:31.333126    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="125b5d798702ff074d41aef5b05a8e4f8da8c26d58c5829bf41c277efa56fa45"
	Oct 17 20:01:31 addons-948763 kubelet[1282]: E1017 20:01:31.335180    1282 status_manager.go:1018] "Failed to get status for pod" err="pods \"helper-pod-delete-pvc-75e5d985-dd82-4e2e-bc28-1cc76f6e0618\" is forbidden: User \"system:node:addons-948763\" cannot get resource \"pods\" in API group \"\" in the namespace \"local-path-storage\": no relationship found between node 'addons-948763' and this object" podUID="f8be49f3-4261-49fb-a532-d84069a75202" pod="local-path-storage/helper-pod-delete-pvc-75e5d985-dd82-4e2e-bc28-1cc76f6e0618"
	Oct 17 20:01:32 addons-948763 kubelet[1282]: I1017 20:01:32.688502    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8be49f3-4261-49fb-a532-d84069a75202" path="/var/lib/kubelet/pods/f8be49f3-4261-49fb-a532-d84069a75202/volumes"
	Oct 17 20:02:06 addons-948763 kubelet[1282]: I1017 20:02:06.686134    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5jjqn" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:02:16 addons-948763 kubelet[1282]: I1017 20:02:16.686770    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7vw8v" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:02:16 addons-948763 kubelet[1282]: I1017 20:02:16.859595    1282 scope.go:117] "RemoveContainer" containerID="b2ffc59641715e4868f284bccd41959a9aa7327ba613c4992ffbdcfc1135e53d"
	Oct 17 20:02:16 addons-948763 kubelet[1282]: I1017 20:02:16.878988    1282 scope.go:117] "RemoveContainer" containerID="f72344d189545eeb5c7b5940525ec101e2c916f97bc03496605afa45fe0ad52b"
	Oct 17 20:02:34 addons-948763 kubelet[1282]: I1017 20:02:34.686056    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-pj8zh" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:03:12 addons-948763 kubelet[1282]: I1017 20:03:12.285805    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-w5f6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:03:14 addons-948763 kubelet[1282]: I1017 20:03:14.689902    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-w5f6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:03:14 addons-948763 kubelet[1282]: I1017 20:03:14.689967    1282 scope.go:117] "RemoveContainer" containerID="e10870e12122b9e8c537a73d9fe84318d3c23b01db3442e0afb7a70c489aa9e1"
	Oct 17 20:03:15 addons-948763 kubelet[1282]: I1017 20:03:15.697785    1282 scope.go:117] "RemoveContainer" containerID="e10870e12122b9e8c537a73d9fe84318d3c23b01db3442e0afb7a70c489aa9e1"
	Oct 17 20:03:15 addons-948763 kubelet[1282]: I1017 20:03:15.698791    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-w5f6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:03:15 addons-948763 kubelet[1282]: I1017 20:03:15.698959    1282 scope.go:117] "RemoveContainer" containerID="78cd49d5b52c0975b66a04b3bd1429d476141737a646b6a8205ed8807a52d3db"
	Oct 17 20:03:15 addons-948763 kubelet[1282]: E1017 20:03:15.699299    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-w5f6g_kube-system(e0294cdc-575e-4386-8522-945e51a2b371)\"" pod="kube-system/registry-creds-764b6fb674-w5f6g" podUID="e0294cdc-575e-4386-8522-945e51a2b371"
	Oct 17 20:03:16 addons-948763 kubelet[1282]: I1017 20:03:16.703259    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-w5f6g" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 20:03:16 addons-948763 kubelet[1282]: I1017 20:03:16.703317    1282 scope.go:117] "RemoveContainer" containerID="78cd49d5b52c0975b66a04b3bd1429d476141737a646b6a8205ed8807a52d3db"
	Oct 17 20:03:16 addons-948763 kubelet[1282]: E1017 20:03:16.703450    1282 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-w5f6g_kube-system(e0294cdc-575e-4386-8522-945e51a2b371)\"" pod="kube-system/registry-creds-764b6fb674-w5f6g" podUID="e0294cdc-575e-4386-8522-945e51a2b371"
	Oct 17 20:03:21 addons-948763 kubelet[1282]: I1017 20:03:21.051637    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dd67d5ac-2965-4043-8426-d040031231db-gcp-creds\") pod \"hello-world-app-5d498dc89-h6n8p\" (UID: \"dd67d5ac-2965-4043-8426-d040031231db\") " pod="default/hello-world-app-5d498dc89-h6n8p"
	Oct 17 20:03:21 addons-948763 kubelet[1282]: I1017 20:03:21.051708    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvpt5\" (UniqueName: \"kubernetes.io/projected/dd67d5ac-2965-4043-8426-d040031231db-kube-api-access-fvpt5\") pod \"hello-world-app-5d498dc89-h6n8p\" (UID: \"dd67d5ac-2965-4043-8426-d040031231db\") " pod="default/hello-world-app-5d498dc89-h6n8p"
	Oct 17 20:03:21 addons-948763 kubelet[1282]: W1017 20:03:21.321605    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/crio-54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd WatchSource:0}: Error finding container 54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd: Status 404 returned error can't find the container with id 54664431205c6495b58921e90ce60b46fc95232adaaefcd33fa75945ecc663fd
	Oct 17 20:03:22 addons-948763 kubelet[1282]: I1017 20:03:22.749159    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-h6n8p" podStartSLOduration=1.869535559 podStartE2EDuration="2.749131277s" podCreationTimestamp="2025-10-17 20:03:20 +0000 UTC" firstStartedPulling="2025-10-17 20:03:21.326672732 +0000 UTC m=+304.765117272" lastFinishedPulling="2025-10-17 20:03:22.20626845 +0000 UTC m=+305.644712990" observedRunningTime="2025-10-17 20:03:22.74857698 +0000 UTC m=+306.187021528" watchObservedRunningTime="2025-10-17 20:03:22.749131277 +0000 UTC m=+306.187575817"
	
	
	==> storage-provisioner [5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a] <==
	W1017 20:02:58.847576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:00.850833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:00.855527       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:02.859296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:02.863804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:04.867185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:04.873935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:06.877377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:06.883635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:08.886300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:08.892770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:10.897173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:10.908059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:12.911804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:12.920276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:14.923608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:14.930704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:16.935359       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:16.940049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:18.942819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:18.947669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:20.979863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:21.044764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:23.054217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:03:23.059732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-948763 -n addons-948763
helpers_test.go:269: (dbg) Run:  kubectl --context addons-948763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp: exit status 1 (127.183855ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z4cxf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvpgp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (277.453279ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:03:24.478800  596546 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:03:24.480073  596546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:03:24.480127  596546 out.go:374] Setting ErrFile to fd 2...
	I1017 20:03:24.480149  596546 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:03:24.480493  596546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:03:24.480912  596546 mustload.go:65] Loading cluster: addons-948763
	I1017 20:03:24.481426  596546 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:03:24.481470  596546 addons.go:606] checking whether the cluster is paused
	I1017 20:03:24.481626  596546 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:03:24.481668  596546 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:03:24.482409  596546 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:03:24.500407  596546 ssh_runner.go:195] Run: systemctl --version
	I1017 20:03:24.500469  596546 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:03:24.517887  596546 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:03:24.622064  596546 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:03:24.622171  596546 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:03:24.654659  596546 cri.go:89] found id: "78cd49d5b52c0975b66a04b3bd1429d476141737a646b6a8205ed8807a52d3db"
	I1017 20:03:24.654735  596546 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:03:24.654758  596546 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:03:24.654785  596546 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:03:24.654818  596546 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:03:24.654840  596546 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:03:24.654860  596546 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:03:24.654881  596546 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:03:24.654917  596546 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:03:24.654936  596546 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:03:24.654954  596546 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:03:24.654985  596546 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:03:24.655009  596546 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:03:24.655027  596546 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:03:24.655046  596546 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:03:24.655078  596546 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:03:24.655123  596546 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:03:24.655145  596546 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:03:24.655165  596546 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:03:24.655200  596546 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:03:24.655229  596546 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:03:24.655244  596546 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:03:24.655249  596546 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:03:24.655252  596546 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:03:24.655255  596546 cri.go:89] found id: ""
	I1017 20:03:24.655340  596546 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:03:24.670832  596546 out.go:203] 
	W1017 20:03:24.673762  596546 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:03:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:03:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:03:24.673790  596546 out.go:285] * 
	* 
	W1017 20:03:24.681047  596546 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:03:24.684102  596546 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable ingress --alsologtostderr -v=1: exit status 11 (281.250304ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:03:24.749777  596589 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:03:24.750633  596589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:03:24.750678  596589 out.go:374] Setting ErrFile to fd 2...
	I1017 20:03:24.750701  596589 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:03:24.751031  596589 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:03:24.751437  596589 mustload.go:65] Loading cluster: addons-948763
	I1017 20:03:24.751886  596589 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:03:24.752040  596589 addons.go:606] checking whether the cluster is paused
	I1017 20:03:24.752189  596589 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:03:24.752232  596589 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:03:24.752756  596589 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:03:24.770081  596589 ssh_runner.go:195] Run: systemctl --version
	I1017 20:03:24.770142  596589 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:03:24.788608  596589 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:03:24.902817  596589 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:03:24.902897  596589 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:03:24.937884  596589 cri.go:89] found id: "78cd49d5b52c0975b66a04b3bd1429d476141737a646b6a8205ed8807a52d3db"
	I1017 20:03:24.937903  596589 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:03:24.937908  596589 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:03:24.937912  596589 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:03:24.937920  596589 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:03:24.937924  596589 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:03:24.937927  596589 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:03:24.937930  596589 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:03:24.937933  596589 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:03:24.937938  596589 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:03:24.937941  596589 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:03:24.937944  596589 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:03:24.937947  596589 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:03:24.937950  596589 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:03:24.937953  596589 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:03:24.937958  596589 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:03:24.937961  596589 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:03:24.937965  596589 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:03:24.937968  596589 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:03:24.937971  596589 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:03:24.937976  596589 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:03:24.937979  596589 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:03:24.937982  596589 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:03:24.937985  596589 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:03:24.937987  596589 cri.go:89] found id: ""
	I1017 20:03:24.938038  596589 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:03:24.954398  596589 out.go:203] 
	W1017 20:03:24.957327  596589 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:03:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:03:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:03:24.957354  596589 out.go:285] * 
	* 
	W1017 20:03:24.964654  596589 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:03:24.967770  596589 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.50s)
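
Note on the exit status 11 failures above: each "addons disable" invocation aborts in the paused-state pre-check rather than in the addon itself. The captured stderr shows the sequence: inspect the addons-948763 container, open an SSH session, list kube-system containers via crictl, then run "sudo runc list -f json", which fails with "open /run/runc: no such file or directory" on this crio node. The Go snippet below is a minimal local sketch of that last probe, for illustration only; the checkRuncList helper and its error handling are assumptions, not minikube's actual implementation, and in the real test the command runs over SSH inside the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkRuncList mirrors the probe visible in the stderr above: it runs
// "sudo runc list -f json" and reports whether the runc state directory
// is missing (the "open /run/runc: no such file or directory" error).
// Sketch only; minikube performs the equivalent step via an SSH runner.
func checkRuncList() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "no such file or directory") {
			return fmt.Errorf("runc state dir missing (runtime may keep state elsewhere): %s",
				strings.TrimSpace(string(out)))
		}
		return fmt.Errorf("runc list failed: %v: %s", err, out)
	}
	fmt.Printf("runc list output: %s\n", out)
	return nil
}

func main() {
	if err := checkRuncList(); err != nil {
		fmt.Println("paused-state probe failed:", err)
	}
}

Running this on a host whose container runtime does not populate /run/runc reproduces the same error string that drives the MK_ADDON_DISABLE_PAUSED exits recorded in this report.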

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dd22p" [2432efb2-8845-4765-abb0-6bd8aac44544] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004462607s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (266.812847ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:59.252974  594101 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:59.253795  594101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:59.253837  594101 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:59.253859  594101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:59.254167  594101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:59.254541  594101 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:59.254998  594101 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:59.255050  594101 addons.go:606] checking whether the cluster is paused
	I1017 20:00:59.255253  594101 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:59.255296  594101 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:59.255789  594101 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:59.272423  594101 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:59.272484  594101 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:59.292241  594101 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:59.402278  594101 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:59.402406  594101 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:59.431741  594101 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:59.431763  594101 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:59.431768  594101 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:59.431772  594101 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:59.431775  594101 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:59.431779  594101 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:59.431782  594101 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:59.431790  594101 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:59.431793  594101 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:59.431801  594101 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:59.431810  594101 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:59.431813  594101 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:59.431816  594101 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:59.431819  594101 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:59.431823  594101 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:59.431830  594101 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:59.431838  594101 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:59.431843  594101 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:59.431846  594101 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:59.431849  594101 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:59.431854  594101 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:59.431857  594101 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:59.431863  594101 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:59.431870  594101 cri.go:89] found id: ""
	I1017 20:00:59.431921  594101 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:59.448762  594101 out.go:203] 
	W1017 20:00:59.452235  594101 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:59.452258  594101 out.go:285] * 
	* 
	W1017 20:00:59.460697  594101 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:59.464001  594101 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)
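Note: the failure above is minikube's paused-state pre-check for `addons disable`: it lists the kube-system containers through crictl and then runs `sudo runc list -f json`, which exits with status 1 because /run/runc is missing on this crio node, so the command aborts with MK_ADDON_DISABLE_PAUSED before touching the addon. A minimal sketch for reproducing that check by hand against the same profile (the ssh wrappers are illustrative; the two inner commands are copied verbatim from the stderr above):

	# list kube-system containers through the CRI, as the addon command does
	out/minikube-linux-arm64 -p addons-948763 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails in the log: runc has no state directory at /run/runc here
	out/minikube-linux-arm64 -p addons-948763 ssh -- sudo runc list -f json

The same pre-check failure repeats for every addon enable/disable in the sections that follow.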

                                                
                                    
TestAddons/parallel/MetricsServer (6.49s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 11.715917ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003444903s
addons_test.go:463: (dbg) Run:  kubectl --context addons-948763 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (332.37533ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:53.937068  593956 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:53.937793  593956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:53.937805  593956 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:53.937810  593956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:53.938080  593956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:53.938376  593956 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:53.938759  593956 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:53.938769  593956 addons.go:606] checking whether the cluster is paused
	I1017 20:00:53.938870  593956 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:53.938886  593956 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:53.939393  593956 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:53.957348  593956 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:53.957407  593956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:53.986797  593956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:54.094959  593956 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:54.095053  593956 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:54.160580  593956 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:54.160605  593956 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:54.160611  593956 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:54.160615  593956 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:54.160619  593956 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:54.160623  593956 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:54.160626  593956 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:54.160629  593956 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:54.160632  593956 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:54.160638  593956 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:54.160642  593956 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:54.160645  593956 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:54.160648  593956 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:54.160652  593956 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:54.160655  593956 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:54.160660  593956 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:54.160668  593956 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:54.160672  593956 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:54.160676  593956 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:54.160679  593956 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:54.160684  593956 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:54.160691  593956 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:54.160694  593956 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:54.160697  593956 cri.go:89] found id: ""
	I1017 20:00:54.160747  593956 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:54.179012  593956 out.go:203] 
	W1017 20:00:54.182469  593956 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:54.182556  593956 out.go:285] * 
	* 
	W1017 20:00:54.189967  593956 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:54.193382  593956 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.49s)

                                                
                                    
TestAddons/parallel/CSI (41.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1017 20:00:35.227277  586172 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1017 20:00:35.230645  586172 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1017 20:00:35.230680  586172 kapi.go:107] duration metric: took 3.425164ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.438088ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-948763 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-948763 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [385a4620-a5e7-459d-8085-4e055fddc2b7] Pending
helpers_test.go:352: "task-pv-pod" [385a4620-a5e7-459d-8085-4e055fddc2b7] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [385a4620-a5e7-459d-8085-4e055fddc2b7] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.003932171s
addons_test.go:572: (dbg) Run:  kubectl --context addons-948763 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-948763 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-948763 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-948763 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-948763 delete pod task-pv-pod: (1.282915444s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-948763 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-948763 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-948763 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b836d72f-b421-4cee-94aa-1096d007402c] Pending
helpers_test.go:352: "task-pv-pod-restore" [b836d72f-b421-4cee-94aa-1096d007402c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b836d72f-b421-4cee-94aa-1096d007402c] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003284484s
addons_test.go:614: (dbg) Run:  kubectl --context addons-948763 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-948763 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-948763 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (275.70595ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:16.018122  594778 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:16.019208  594778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.019228  594778 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:16.019236  594778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.019553  594778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:16.019918  594778 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:16.020363  594778 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.020388  594778 addons.go:606] checking whether the cluster is paused
	I1017 20:01:16.020507  594778 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.020531  594778 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:16.021039  594778 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:16.040832  594778 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:16.040902  594778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:16.061752  594778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:16.170258  594778 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:16.170347  594778 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:16.204064  594778 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:16.204137  594778 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:16.204158  594778 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:16.204176  594778 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:16.204194  594778 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:16.204215  594778 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:16.204242  594778 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:16.204262  594778 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:16.204281  594778 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:16.204303  594778 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:16.204335  594778 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:16.204364  594778 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:16.204385  594778 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:16.204405  594778 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:16.204424  594778 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:16.204446  594778 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:16.204474  594778 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:16.204496  594778 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:16.204515  594778 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:16.204534  594778 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:16.204558  594778 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:16.204577  594778 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:16.204595  594778 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:16.204613  594778 cri.go:89] found id: ""
	I1017 20:01:16.204681  594778 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:16.220613  594778 out.go:203] 
	W1017 20:01:16.223558  594778 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:16.223589  594778 out.go:285] * 
	* 
	W1017 20:01:16.230942  594778 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:16.233816  594778 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (280.295955ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:16.302029  594823 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:16.302872  594823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.302889  594823 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:16.302901  594823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:16.303290  594823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:16.303666  594823 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:16.304098  594823 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.304115  594823 addons.go:606] checking whether the cluster is paused
	I1017 20:01:16.304271  594823 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:16.304305  594823 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:16.304868  594823 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:16.322126  594823 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:16.322192  594823 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:16.340526  594823 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:16.449975  594823 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:16.450111  594823 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:16.485070  594823 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:16.485093  594823 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:16.485098  594823 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:16.485101  594823 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:16.485104  594823 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:16.485119  594823 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:16.485123  594823 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:16.485127  594823 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:16.485130  594823 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:16.485137  594823 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:16.485140  594823 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:16.485143  594823 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:16.485146  594823 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:16.485150  594823 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:16.485154  594823 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:16.485160  594823 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:16.485163  594823 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:16.485167  594823 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:16.485170  594823 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:16.485172  594823 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:16.485177  594823 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:16.485185  594823 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:16.485188  594823 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:16.485191  594823 cri.go:89] found id: ""
	I1017 20:01:16.485242  594823 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:16.500436  594823 out.go:203] 
	W1017 20:01:16.503548  594823 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:16.503581  594823 out.go:285] * 
	* 
	W1017 20:01:16.510827  594823 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:16.514166  594823 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (41.30s)

                                                
                                    
TestAddons/parallel/Headlamp (3.12s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-948763 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-948763 --alsologtostderr -v=1: exit status 11 (267.585427ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:32.164749  593097 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:32.165592  593097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:32.165607  593097 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:32.165612  593097 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:32.165917  593097 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:32.166269  593097 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:32.166745  593097 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:32.166766  593097 addons.go:606] checking whether the cluster is paused
	I1017 20:00:32.166939  593097 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:32.166979  593097 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:32.167606  593097 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:32.185610  593097 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:32.185672  593097 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:32.204065  593097 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:32.306823  593097 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:32.306905  593097 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:32.339409  593097 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:32.339440  593097 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:32.339445  593097 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:32.339450  593097 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:32.339453  593097 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:32.339457  593097 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:32.339460  593097 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:32.339463  593097 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:32.339467  593097 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:32.339476  593097 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:32.339480  593097 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:32.339483  593097 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:32.339486  593097 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:32.339489  593097 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:32.339492  593097 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:32.339499  593097 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:32.339503  593097 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:32.339508  593097 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:32.339511  593097 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:32.339514  593097 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:32.339519  593097 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:32.339522  593097 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:32.339525  593097 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:32.339528  593097 cri.go:89] found id: ""
	I1017 20:00:32.339584  593097 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:32.354656  593097 out.go:203] 
	W1017 20:00:32.357528  593097 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:32.357557  593097 out.go:285] * 
	* 
	W1017 20:00:32.364905  593097 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:32.367624  593097 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-948763 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-948763
helpers_test.go:243: (dbg) docker inspect addons-948763:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440",
	        "Created": "2025-10-17T19:57:52.38390509Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587328,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T19:57:52.446314301Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/hosts",
	        "LogPath": "/var/lib/docker/containers/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440-json.log",
	        "Name": "/addons-948763",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-948763:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-948763",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440",
	                "LowerDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/72392cfbf5f94e6ea59c1f27f7dd30c2ab1a70f952e8068c2a84827dd662693d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-948763",
	                "Source": "/var/lib/docker/volumes/addons-948763/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-948763",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-948763",
	                "name.minikube.sigs.k8s.io": "addons-948763",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8edc7c9f4b4d958807db6a9119427afa05a32700103e91267047f8f774543c65",
	            "SandboxKey": "/var/run/docker/netns/8edc7c9f4b4d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-948763": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:e3:48:f3:fb:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6c4d40919db09851872993f602342c89bd57e0bb2321056f6e797ba7ad60426",
	                    "EndpointID": "042e0d6b37d851c8e012c5c4fe0ed0edb994b9afb914ad39c3400941c2e92be0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-948763",
	                        "5d47ee6e89dc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-948763 -n addons-948763
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-948763 logs -n 25: (1.480741185s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-011118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-011118   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p download-only-011118                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-011118   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -o=json --download-only -p download-only-506703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-506703   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p download-only-506703                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-506703   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p download-only-011118                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-011118   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p download-only-506703                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-506703   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ --download-only -p download-docker-785685 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-785685 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ -p download-docker-785685                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-785685 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ --download-only -p binary-mirror-465085 --alsologtostderr --binary-mirror http://127.0.0.1:39883 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-465085   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ -p binary-mirror-465085                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-465085   │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ addons  │ enable dashboard -p addons-948763                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ addons  │ disable dashboard -p addons-948763                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ start   │ -p addons-948763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 20:00 UTC │
	│ addons  │ addons-948763 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ addons-948763 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	│ addons  │ enable headlamp -p addons-948763 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-948763          │ jenkins │ v1.37.0 │ 17 Oct 25 20:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:57:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:57:25.708417  586929 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:57:25.708534  586929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:25.708543  586929 out.go:374] Setting ErrFile to fd 2...
	I1017 19:57:25.708549  586929 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:25.708805  586929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 19:57:25.709290  586929 out.go:368] Setting JSON to false
	I1017 19:57:25.710149  586929 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9592,"bootTime":1760721454,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 19:57:25.710217  586929 start.go:141] virtualization:  
	I1017 19:57:25.715299  586929 out.go:179] * [addons-948763] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:57:25.718340  586929 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 19:57:25.718431  586929 notify.go:220] Checking for updates...
	I1017 19:57:25.724023  586929 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:57:25.726932  586929 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:57:25.729762  586929 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 19:57:25.732487  586929 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 19:57:25.735366  586929 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 19:57:25.738524  586929 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:57:25.762378  586929 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:57:25.762519  586929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:25.836686  586929 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 19:57:25.827835294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:25.836793  586929 docker.go:318] overlay module found
	I1017 19:57:25.839890  586929 out.go:179] * Using the docker driver based on user configuration
	I1017 19:57:25.842642  586929 start.go:305] selected driver: docker
	I1017 19:57:25.842658  586929 start.go:925] validating driver "docker" against <nil>
	I1017 19:57:25.842673  586929 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 19:57:25.843449  586929 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:25.896195  586929 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:49 SystemTime:2025-10-17 19:57:25.886727968 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:25.896347  586929 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:57:25.896582  586929 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:57:25.899495  586929 out.go:179] * Using Docker driver with root privileges
	I1017 19:57:25.902349  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:57:25.902415  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:57:25.902427  586929 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:57:25.902506  586929 start.go:349] cluster config:
	{Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1017 19:57:25.905521  586929 out.go:179] * Starting "addons-948763" primary control-plane node in "addons-948763" cluster
	I1017 19:57:25.908290  586929 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:57:25.911243  586929 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:57:25.914053  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:25.914105  586929 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 19:57:25.914119  586929 cache.go:58] Caching tarball of preloaded images
	I1017 19:57:25.914148  586929 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:57:25.914202  586929 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 19:57:25.914212  586929 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 19:57:25.914546  586929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json ...
	I1017 19:57:25.914577  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json: {Name:mk6fbbe992c885173d02c12fb732ce7886450d0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:25.930173  586929 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:57:25.930322  586929 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:57:25.930342  586929 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory, skipping pull
	I1017 19:57:25.930347  586929 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in cache, skipping pull
	I1017 19:57:25.930361  586929 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 as a tarball
	I1017 19:57:25.930366  586929 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from local cache
	I1017 19:57:43.981774  586929 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 from cached tarball
	I1017 19:57:43.981817  586929 cache.go:232] Successfully downloaded all kic artifacts
	I1017 19:57:43.981848  586929 start.go:360] acquireMachinesLock for addons-948763: {Name:mk68e71e96d7a5ca2beb265f792c62d71f65313a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 19:57:43.982507  586929 start.go:364] duration metric: took 633.174µs to acquireMachinesLock for "addons-948763"
	I1017 19:57:43.982550  586929 start.go:93] Provisioning new machine with config: &{Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:57:43.982652  586929 start.go:125] createHost starting for "" (driver="docker")
	I1017 19:57:43.986131  586929 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1017 19:57:43.986379  586929 start.go:159] libmachine.API.Create for "addons-948763" (driver="docker")
	I1017 19:57:43.986438  586929 client.go:168] LocalClient.Create starting
	I1017 19:57:43.986563  586929 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 19:57:45.470574  586929 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 19:57:45.610855  586929 cli_runner.go:164] Run: docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 19:57:45.628013  586929 cli_runner.go:211] docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 19:57:45.628104  586929 network_create.go:284] running [docker network inspect addons-948763] to gather additional debugging logs...
	I1017 19:57:45.628136  586929 cli_runner.go:164] Run: docker network inspect addons-948763
	W1017 19:57:45.643559  586929 cli_runner.go:211] docker network inspect addons-948763 returned with exit code 1
	I1017 19:57:45.643589  586929 network_create.go:287] error running [docker network inspect addons-948763]: docker network inspect addons-948763: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-948763 not found
	I1017 19:57:45.643602  586929 network_create.go:289] output of [docker network inspect addons-948763]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-948763 not found
	
	** /stderr **
	I1017 19:57:45.643703  586929 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:57:45.659514  586929 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d41570}
	I1017 19:57:45.659556  586929 network_create.go:124] attempt to create docker network addons-948763 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1017 19:57:45.659620  586929 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-948763 addons-948763
	I1017 19:57:45.718108  586929 network_create.go:108] docker network addons-948763 192.168.49.0/24 created
	I1017 19:57:45.718144  586929 kic.go:121] calculated static IP "192.168.49.2" for the "addons-948763" container
	I1017 19:57:45.718231  586929 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 19:57:45.737126  586929 cli_runner.go:164] Run: docker volume create addons-948763 --label name.minikube.sigs.k8s.io=addons-948763 --label created_by.minikube.sigs.k8s.io=true
	I1017 19:57:45.755786  586929 oci.go:103] Successfully created a docker volume addons-948763
	I1017 19:57:45.755877  586929 cli_runner.go:164] Run: docker run --rm --name addons-948763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --entrypoint /usr/bin/test -v addons-948763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 19:57:47.900151  586929 cli_runner.go:217] Completed: docker run --rm --name addons-948763-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --entrypoint /usr/bin/test -v addons-948763:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib: (2.14421258s)
	I1017 19:57:47.900186  586929 oci.go:107] Successfully prepared a docker volume addons-948763
	I1017 19:57:47.900207  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:47.900226  586929 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 19:57:47.900296  586929 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-948763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	I1017 19:57:52.313822  586929 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-948763:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.413484379s)
	I1017 19:57:52.313853  586929 kic.go:203] duration metric: took 4.413624583s to extract preloaded images to volume ...
	W1017 19:57:52.313986  586929 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 19:57:52.314094  586929 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 19:57:52.368818  586929 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-948763 --name addons-948763 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-948763 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-948763 --network addons-948763 --ip 192.168.49.2 --volume addons-948763:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 19:57:52.641407  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Running}}
	I1017 19:57:52.661063  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:52.687149  586929 cli_runner.go:164] Run: docker exec addons-948763 stat /var/lib/dpkg/alternatives/iptables
	I1017 19:57:52.739807  586929 oci.go:144] the created container "addons-948763" has a running status.
	I1017 19:57:52.739836  586929 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa...
	I1017 19:57:53.302878  586929 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 19:57:53.325169  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:53.347185  586929 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 19:57:53.347206  586929 kic_runner.go:114] Args: [docker exec --privileged addons-948763 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 19:57:53.399407  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:57:53.435368  586929 machine.go:93] provisionDockerMachine start ...
	I1017 19:57:53.435472  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.464716  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.465037  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.465051  586929 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 19:57:53.646964  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-948763
	
	I1017 19:57:53.646987  586929 ubuntu.go:182] provisioning hostname "addons-948763"
	I1017 19:57:53.647052  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.670147  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.670453  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.670467  586929 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-948763 && echo "addons-948763" | sudo tee /etc/hostname
	I1017 19:57:53.838317  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-948763
	
	I1017 19:57:53.838404  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:53.856599  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:53.856911  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:53.856932  586929 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-948763' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-948763/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-948763' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 19:57:54.012471  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 19:57:54.012500  586929 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 19:57:54.012547  586929 ubuntu.go:190] setting up certificates
	I1017 19:57:54.012558  586929 provision.go:84] configureAuth start
	I1017 19:57:54.012628  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.030502  586929 provision.go:143] copyHostCerts
	I1017 19:57:54.030606  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 19:57:54.030737  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 19:57:54.030797  586929 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 19:57:54.030849  586929 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.addons-948763 san=[127.0.0.1 192.168.49.2 addons-948763 localhost minikube]
	I1017 19:57:54.229175  586929 provision.go:177] copyRemoteCerts
	I1017 19:57:54.229244  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 19:57:54.229284  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.246366  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.350918  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 19:57:54.368528  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 19:57:54.385836  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 19:57:54.404762  586929 provision.go:87] duration metric: took 392.177291ms to configureAuth
	I1017 19:57:54.404791  586929 ubuntu.go:206] setting minikube options for container-runtime
	I1017 19:57:54.405008  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:57:54.405120  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.422221  586929 main.go:141] libmachine: Using SSH client type: native
	I1017 19:57:54.422534  586929 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1017 19:57:54.422557  586929 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 19:57:54.681007  586929 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 19:57:54.681072  586929 machine.go:96] duration metric: took 1.245682786s to provisionDockerMachine
	I1017 19:57:54.681097  586929 client.go:171] duration metric: took 10.694646314s to LocalClient.Create
	I1017 19:57:54.681129  586929 start.go:167] duration metric: took 10.694750676s to libmachine.API.Create "addons-948763"
	I1017 19:57:54.681167  586929 start.go:293] postStartSetup for "addons-948763" (driver="docker")
	I1017 19:57:54.681192  586929 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 19:57:54.681300  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 19:57:54.681414  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.697811  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.798991  586929 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 19:57:54.802126  586929 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 19:57:54.802153  586929 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 19:57:54.802164  586929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 19:57:54.802232  586929 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 19:57:54.802261  586929 start.go:296] duration metric: took 121.073817ms for postStartSetup
	I1017 19:57:54.802574  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.819935  586929 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/config.json ...
	I1017 19:57:54.820203  586929 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 19:57:54.820252  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.839513  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.940177  586929 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 19:57:54.945018  586929 start.go:128] duration metric: took 10.96235005s to createHost
	I1017 19:57:54.945040  586929 start.go:83] releasing machines lock for "addons-948763", held for 10.962512777s
	I1017 19:57:54.945110  586929 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-948763
	I1017 19:57:54.965307  586929 ssh_runner.go:195] Run: cat /version.json
	I1017 19:57:54.965334  586929 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 19:57:54.965361  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.965397  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:57:54.983334  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:54.992859  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:57:55.194067  586929 ssh_runner.go:195] Run: systemctl --version
	I1017 19:57:55.200181  586929 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 19:57:55.234975  586929 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 19:57:55.239147  586929 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 19:57:55.239270  586929 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 19:57:55.266110  586929 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 19:57:55.266197  586929 start.go:495] detecting cgroup driver to use...
	I1017 19:57:55.266244  586929 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 19:57:55.266319  586929 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 19:57:55.283076  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 19:57:55.295253  586929 docker.go:218] disabling cri-docker service (if available) ...
	I1017 19:57:55.295339  586929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 19:57:55.312297  586929 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 19:57:55.331465  586929 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 19:57:55.449400  586929 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 19:57:55.579257  586929 docker.go:234] disabling docker service ...
	I1017 19:57:55.579353  586929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 19:57:55.599908  586929 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 19:57:55.612975  586929 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 19:57:55.723423  586929 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 19:57:55.846699  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 19:57:55.859307  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 19:57:55.872860  586929 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 19:57:55.872934  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.881564  586929 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 19:57:55.881676  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.891070  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.900301  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.909690  586929 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 19:57:55.918467  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.927594  586929 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.941060  586929 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 19:57:55.949741  586929 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 19:57:55.957161  586929 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 19:57:55.964345  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:57:56.076303  586929 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 19:57:56.201147  586929 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 19:57:56.201289  586929 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 19:57:56.205067  586929 start.go:563] Will wait 60s for crictl version
	I1017 19:57:56.205176  586929 ssh_runner.go:195] Run: which crictl
	I1017 19:57:56.208507  586929 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 19:57:56.232904  586929 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 19:57:56.233050  586929 ssh_runner.go:195] Run: crio --version
	I1017 19:57:56.262059  586929 ssh_runner.go:195] Run: crio --version
	I1017 19:57:56.293819  586929 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 19:57:56.296660  586929 cli_runner.go:164] Run: docker network inspect addons-948763 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 19:57:56.311224  586929 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 19:57:56.314706  586929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:57:56.324027  586929 kubeadm.go:883] updating cluster {Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 19:57:56.324150  586929 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 19:57:56.324213  586929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:57:56.357103  586929 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:57:56.357128  586929 crio.go:433] Images already preloaded, skipping extraction
	I1017 19:57:56.357188  586929 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 19:57:56.382126  586929 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 19:57:56.382148  586929 cache_images.go:85] Images are preloaded, skipping loading
	I1017 19:57:56.382156  586929 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 19:57:56.382286  586929 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-948763 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 19:57:56.382384  586929 ssh_runner.go:195] Run: crio config
	I1017 19:57:56.433873  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:57:56.433897  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:57:56.433916  586929 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 19:57:56.433967  586929 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-948763 NodeName:addons-948763 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 19:57:56.434187  586929 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-948763"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 19:57:56.434287  586929 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 19:57:56.442274  586929 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 19:57:56.442386  586929 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 19:57:56.450109  586929 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 19:57:56.463517  586929 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 19:57:56.475842  586929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1017 19:57:56.489036  586929 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 19:57:56.492701  586929 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 19:57:56.502450  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:57:56.614617  586929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:57:56.636901  586929 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763 for IP: 192.168.49.2
	I1017 19:57:56.636970  586929 certs.go:195] generating shared ca certs ...
	I1017 19:57:56.637004  586929 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:56.637188  586929 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 19:57:57.401074  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt ...
	I1017 19:57:57.401108  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt: {Name:mk2284f82e0c9b99696c8a1614a44d6a7619b033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.401320  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key ...
	I1017 19:57:57.401334  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key: {Name:mkab7b9ac9299104fd96211f14f1d513b7f9d51a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.402055  586929 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 19:57:57.830396  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt ...
	I1017 19:57:57.830432  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt: {Name:mk761e132cc40987111a33bc312c624c4a89dd04 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.831274  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key ...
	I1017 19:57:57.831291  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key: {Name:mk1adffd7f636109c810327716c0450bc669be52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:57.831943  586929 certs.go:257] generating profile certs ...
	I1017 19:57:57.832008  586929 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key
	I1017 19:57:57.832025  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt with IP's: []
	I1017 19:57:58.495006  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt ...
	I1017 19:57:58.495038  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: {Name:mk96d9941e1d00385c50ab5d03c51c54ebdddb8b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:58.495235  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key ...
	I1017 19:57:58.495247  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.key: {Name:mk2cabde2d08f0807e2862c2336fc0029f2db9c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:58.495349  586929 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b
	I1017 19:57:58.495369  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1017 19:57:59.195244  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b ...
	I1017 19:57:59.195276  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b: {Name:mk01618b660bda5b796ad0f6a57510890dc176a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.195451  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b ...
	I1017 19:57:59.195465  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b: {Name:mk73a701fd2daa299da09e2f89352133a36098b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.195550  586929 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt.a7b5675b -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt
	I1017 19:57:59.195635  586929 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key.a7b5675b -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key
	I1017 19:57:59.195688  586929 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key
	I1017 19:57:59.195703  586929 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt with IP's: []
	I1017 19:57:59.686172  586929 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt ...
	I1017 19:57:59.686204  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt: {Name:mk28f3e286380b2d4ab16eb013ce090dbea224be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.686386  586929 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key ...
	I1017 19:57:59.686404  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key: {Name:mkbe8ee3c63df4d589cace96d9bb321c55126e44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:59.686608  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 19:57:59.686656  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 19:57:59.686688  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 19:57:59.686725  586929 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 19:57:59.687370  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 19:57:59.716100  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 19:57:59.740064  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 19:57:59.757780  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 19:57:59.775490  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1017 19:57:59.792592  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 19:57:59.809919  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 19:57:59.829095  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 19:57:59.846611  586929 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 19:57:59.863825  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 19:57:59.876942  586929 ssh_runner.go:195] Run: openssl version
	I1017 19:57:59.883017  586929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 19:57:59.891500  586929 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.895188  586929 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.895260  586929 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 19:57:59.940797  586929 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 19:57:59.949177  586929 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 19:57:59.952860  586929 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 19:57:59.952958  586929 kubeadm.go:400] StartCluster: {Name:addons-948763 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-948763 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:57:59.953047  586929 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 19:57:59.953104  586929 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 19:57:59.982804  586929 cri.go:89] found id: ""
	I1017 19:57:59.982887  586929 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 19:57:59.991051  586929 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 19:57:59.998750  586929 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 19:57:59.998819  586929 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 19:58:00.013977  586929 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 19:58:00.013997  586929 kubeadm.go:157] found existing configuration files:
	
	I1017 19:58:00.014060  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 19:58:00.053893  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 19:58:00.053975  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 19:58:00.085185  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 19:58:00.121301  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 19:58:00.123280  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 19:58:00.152126  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 19:58:00.178258  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 19:58:00.178358  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 19:58:00.196846  586929 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 19:58:00.208420  586929 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 19:58:00.208531  586929 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 19:58:00.243422  586929 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 19:58:00.333539  586929 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 19:58:00.334052  586929 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 19:58:00.389055  586929 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 19:58:00.389130  586929 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 19:58:00.389168  586929 kubeadm.go:318] OS: Linux
	I1017 19:58:00.389218  586929 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 19:58:00.389269  586929 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 19:58:00.389320  586929 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 19:58:00.389371  586929 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 19:58:00.389433  586929 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 19:58:00.389489  586929 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 19:58:00.389538  586929 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 19:58:00.389589  586929 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 19:58:00.389642  586929 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 19:58:00.483846  586929 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 19:58:00.483996  586929 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 19:58:00.484100  586929 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 19:58:00.494920  586929 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 19:58:00.501762  586929 out.go:252]   - Generating certificates and keys ...
	I1017 19:58:00.501882  586929 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 19:58:00.501965  586929 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 19:58:01.730273  586929 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 19:58:01.930112  586929 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 19:58:02.488393  586929 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 19:58:03.378158  586929 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 19:58:03.784393  586929 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 19:58:03.784571  586929 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-948763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:58:04.260196  586929 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 19:58:04.260493  586929 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-948763 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1017 19:58:04.656507  586929 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 19:58:05.129321  586929 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 19:58:05.703681  586929 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 19:58:05.703968  586929 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 19:58:06.027830  586929 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 19:58:06.251642  586929 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 19:58:06.587003  586929 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 19:58:07.654619  586929 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 19:58:07.735603  586929 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 19:58:07.736202  586929 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 19:58:07.738758  586929 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 19:58:07.742127  586929 out.go:252]   - Booting up control plane ...
	I1017 19:58:07.742226  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 19:58:07.742307  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 19:58:07.742386  586929 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 19:58:07.756649  586929 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 19:58:07.756955  586929 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 19:58:07.764413  586929 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 19:58:07.764683  586929 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 19:58:07.764871  586929 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 19:58:07.898927  586929 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 19:58:07.899058  586929 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 19:58:09.402917  586929 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.504039545s
	I1017 19:58:09.406409  586929 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 19:58:09.406514  586929 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1017 19:58:09.406770  586929 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 19:58:09.406866  586929 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 19:58:13.031065  586929 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.62421357s
	I1017 19:58:14.786817  586929 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.380332439s
	I1017 19:58:15.909100  586929 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.502439573s
	I1017 19:58:15.929968  586929 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 19:58:15.943398  586929 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 19:58:15.961417  586929 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 19:58:15.961652  586929 kubeadm.go:318] [mark-control-plane] Marking the node addons-948763 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 19:58:15.973971  586929 kubeadm.go:318] [bootstrap-token] Using token: lbpa4m.5ssgzkitrp191svg
	I1017 19:58:15.977116  586929 out.go:252]   - Configuring RBAC rules ...
	I1017 19:58:15.977274  586929 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 19:58:15.983811  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 19:58:15.994655  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 19:58:15.999937  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 19:58:16.007767  586929 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 19:58:16.020786  586929 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 19:58:16.318378  586929 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 19:58:16.759558  586929 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 19:58:17.316428  586929 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 19:58:17.318999  586929 kubeadm.go:318] 
	I1017 19:58:17.319084  586929 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 19:58:17.319097  586929 kubeadm.go:318] 
	I1017 19:58:17.319212  586929 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 19:58:17.319224  586929 kubeadm.go:318] 
	I1017 19:58:17.319267  586929 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 19:58:17.319332  586929 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 19:58:17.319415  586929 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 19:58:17.319435  586929 kubeadm.go:318] 
	I1017 19:58:17.319502  586929 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 19:58:17.319511  586929 kubeadm.go:318] 
	I1017 19:58:17.319591  586929 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 19:58:17.319600  586929 kubeadm.go:318] 
	I1017 19:58:17.319655  586929 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 19:58:17.319748  586929 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 19:58:17.319831  586929 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 19:58:17.319840  586929 kubeadm.go:318] 
	I1017 19:58:17.319932  586929 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 19:58:17.320019  586929 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 19:58:17.320027  586929 kubeadm.go:318] 
	I1017 19:58:17.320135  586929 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token lbpa4m.5ssgzkitrp191svg \
	I1017 19:58:17.320250  586929 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 19:58:17.320272  586929 kubeadm.go:318] 	--control-plane 
	I1017 19:58:17.320277  586929 kubeadm.go:318] 
	I1017 19:58:17.320366  586929 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 19:58:17.320371  586929 kubeadm.go:318] 
	I1017 19:58:17.320466  586929 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token lbpa4m.5ssgzkitrp191svg \
	I1017 19:58:17.320574  586929 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 19:58:17.323632  586929 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 19:58:17.323900  586929 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 19:58:17.324033  586929 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 19:58:17.324047  586929 cni.go:84] Creating CNI manager for ""
	I1017 19:58:17.324055  586929 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:58:17.327224  586929 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 19:58:17.330207  586929 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 19:58:17.334792  586929 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 19:58:17.334813  586929 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 19:58:17.349608  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 19:58:17.675142  586929 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 19:58:17.675277  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:17.675371  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-948763 minikube.k8s.io/updated_at=2025_10_17T19_58_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=addons-948763 minikube.k8s.io/primary=true
	I1017 19:58:17.836779  586929 ops.go:34] apiserver oom_adj: -16
	I1017 19:58:17.836971  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:18.337548  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:18.837069  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:19.337155  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:19.837672  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:20.337067  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:20.837653  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.337114  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.837131  586929 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 19:58:21.941228  586929 kubeadm.go:1113] duration metric: took 4.26599672s to wait for elevateKubeSystemPrivileges
	I1017 19:58:21.941257  586929 kubeadm.go:402] duration metric: took 21.988302535s to StartCluster
	I1017 19:58:21.941274  586929 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:58:21.941398  586929 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:58:21.941777  586929 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:58:21.941990  586929 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 19:58:21.942177  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 19:58:21.942446  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:58:21.942554  586929 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1017 19:58:21.942633  586929 addons.go:69] Setting yakd=true in profile "addons-948763"
	I1017 19:58:21.942646  586929 addons.go:238] Setting addon yakd=true in "addons-948763"
	I1017 19:58:21.942669  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.943214  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.943764  586929 addons.go:69] Setting metrics-server=true in profile "addons-948763"
	I1017 19:58:21.943798  586929 addons.go:238] Setting addon metrics-server=true in "addons-948763"
	I1017 19:58:21.943831  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.943859  586929 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-948763"
	I1017 19:58:21.943877  586929 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-948763"
	I1017 19:58:21.943908  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.944257  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.944294  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948612  586929 addons.go:69] Setting registry=true in profile "addons-948763"
	I1017 19:58:21.949125  586929 addons.go:238] Setting addon registry=true in "addons-948763"
	I1017 19:58:21.949231  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.948784  586929 addons.go:69] Setting registry-creds=true in profile "addons-948763"
	I1017 19:58:21.949990  586929 addons.go:238] Setting addon registry-creds=true in "addons-948763"
	I1017 19:58:21.950061  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.948798  586929 addons.go:69] Setting storage-provisioner=true in profile "addons-948763"
	I1017 19:58:21.951082  586929 addons.go:238] Setting addon storage-provisioner=true in "addons-948763"
	I1017 19:58:21.951120  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.951512  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.956052  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948808  586929 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-948763"
	I1017 19:58:21.959543  586929 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-948763"
	I1017 19:58:21.959961  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.948815  586929 addons.go:69] Setting volcano=true in profile "addons-948763"
	I1017 19:58:21.960121  586929 addons.go:238] Setting addon volcano=true in "addons-948763"
	I1017 19:58:21.948822  586929 addons.go:69] Setting volumesnapshots=true in profile "addons-948763"
	I1017 19:58:21.949026  586929 out.go:179] * Verifying Kubernetes components...
	I1017 19:58:21.949041  586929 addons.go:69] Setting default-storageclass=true in profile "addons-948763"
	I1017 19:58:21.949049  586929 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-948763"
	I1017 19:58:21.949056  586929 addons.go:69] Setting cloud-spanner=true in profile "addons-948763"
	I1017 19:58:21.949062  586929 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-948763"
	I1017 19:58:21.949077  586929 addons.go:69] Setting ingress=true in profile "addons-948763"
	I1017 19:58:21.949083  586929 addons.go:69] Setting gcp-auth=true in profile "addons-948763"
	I1017 19:58:21.949089  586929 addons.go:69] Setting ingress-dns=true in profile "addons-948763"
	I1017 19:58:21.949101  586929 addons.go:69] Setting inspektor-gadget=true in profile "addons-948763"
	I1017 19:58:21.960188  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.987647  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.988242  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:21.988543  586929 addons.go:238] Setting addon ingress=true in "addons-948763"
	I1017 19:58:21.988604  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:21.989032  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.011320  586929 mustload.go:65] Loading cluster: addons-948763
	I1017 19:58:22.011666  586929 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 19:58:22.012066  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.013452  586929 addons.go:238] Setting addon volumesnapshots=true in "addons-948763"
	I1017 19:58:22.013569  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.014067  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.029966  586929 addons.go:238] Setting addon ingress-dns=true in "addons-948763"
	I1017 19:58:22.030036  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.030493  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.041784  586929 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 19:58:22.041919  586929 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-948763"
	I1017 19:58:22.042249  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.047974  586929 addons.go:238] Setting addon inspektor-gadget=true in "addons-948763"
	I1017 19:58:22.048041  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.048638  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.051070  586929 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-948763"
	I1017 19:58:22.051139  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.051615  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.060927  586929 addons.go:238] Setting addon cloud-spanner=true in "addons-948763"
	I1017 19:58:22.061038  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.065714  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.074655  586929 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-948763"
	I1017 19:58:22.074745  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.075364  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.083015  586929 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1017 19:58:22.087573  586929 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:58:22.087639  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1017 19:58:22.087733  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.097397  586929 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1017 19:58:22.100737  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1017 19:58:22.100812  586929 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1017 19:58:22.100935  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.140813  586929 out.go:179]   - Using image docker.io/registry:3.0.0
	I1017 19:58:22.172753  586929 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1017 19:58:22.218313  586929 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1017 19:58:22.221819  586929 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1017 19:58:22.222037  586929 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1017 19:58:22.222084  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1017 19:58:22.222188  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.228223  586929 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:58:22.228304  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1017 19:58:22.228399  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.253920  586929 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 19:58:22.257159  586929 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:58:22.257228  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 19:58:22.257330  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.257523  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1017 19:58:22.260426  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1017 19:58:22.260490  586929 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1017 19:58:22.260600  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.275279  586929 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1017 19:58:22.275717  586929 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 19:58:22.275944  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1017 19:58:22.275957  586929 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1017 19:58:22.276025  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.282732  586929 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-948763"
	I1017 19:58:22.282847  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.283446  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.300036  586929 addons.go:238] Setting addon default-storageclass=true in "addons-948763"
	I1017 19:58:22.300124  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.300573  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:22.350184  586929 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:58:22.350240  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1017 19:58:22.350314  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.357385  586929 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1017 19:58:22.360308  586929 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1017 19:58:22.360337  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1017 19:58:22.360403  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.350130  586929 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1017 19:58:22.369499  586929 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:58:22.369522  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1017 19:58:22.369612  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.391000  586929 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1017 19:58:22.391757  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.394556  586929 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1017 19:58:22.394577  586929 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1017 19:58:22.394647  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.411953  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1017 19:58:22.414873  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	W1017 19:58:22.419421  586929 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1017 19:58:22.420337  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:22.424931  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:58:22.425055  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1017 19:58:22.428077  586929 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:58:22.428095  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1017 19:58:22.428162  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.441156  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1017 19:58:22.447630  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1017 19:58:22.450544  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1017 19:58:22.455941  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1017 19:58:22.460742  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1017 19:58:22.464881  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1017 19:58:22.468121  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.469066  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.472560  586929 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1017 19:58:22.481607  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1017 19:58:22.481639  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1017 19:58:22.481705  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.485169  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.501611  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.537530  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.549164  586929 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1017 19:58:22.554234  586929 out.go:179]   - Using image docker.io/busybox:stable
	I1017 19:58:22.557373  586929 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:58:22.557396  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1017 19:58:22.557469  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.580588  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.582205  586929 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 19:58:22.582222  586929 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 19:58:22.582291  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:22.596886  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.599765  586929 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 19:58:22.611634  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.617988  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.647479  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.648344  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	W1017 19:58:22.652539  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.652582  586929 retry.go:31] will retry after 340.943544ms: ssh: handshake failed: EOF
	W1017 19:58:22.660379  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.660465  586929 retry.go:31] will retry after 335.738207ms: ssh: handshake failed: EOF
	I1017 19:58:22.672758  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	W1017 19:58:22.679369  586929 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1017 19:58:22.679396  586929 retry.go:31] will retry after 186.146509ms: ssh: handshake failed: EOF
	I1017 19:58:22.684423  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:22.692947  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
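The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763` calls above look up the host port that Docker mapped to the container's 22/tcp (SSH) port before each new SSH client is dialed. A rough Go sketch of that lookup, assuming only that the docker CLI is on PATH and that a container named addons-948763 exists (the helper name and error handling are illustrative, not minikube's cli_runner code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostSSHPort shells out to the docker CLI and returns the host port that
	// is mapped to the container's 22/tcp port, mirroring the inspect template
	// seen in the log above.
	func hostSSHPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-948763")
		if err != nil {
			fmt.Println("lookup failed:", err)
			return
		}
		// In this run the lookup resolves to 33512, matching the sshutil lines above.
		fmt.Println("ssh is reachable on 127.0.0.1:" + port)
	}
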
	I1017 19:58:23.080284  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 19:58:23.110978  586929 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:23.111057  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1017 19:58:23.121402  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1017 19:58:23.123019  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1017 19:58:23.123071  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1017 19:58:23.150417  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1017 19:58:23.184922  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1017 19:58:23.208961  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:23.212452  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1017 19:58:23.212478  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1017 19:58:23.218884  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1017 19:58:23.218947  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1017 19:58:23.316516  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1017 19:58:23.316582  586929 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1017 19:58:23.319658  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1017 19:58:23.341362  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1017 19:58:23.341438  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1017 19:58:23.353678  586929 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1017 19:58:23.353751  586929 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1017 19:58:23.366332  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 19:58:23.368216  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1017 19:58:23.369954  586929 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1017 19:58:23.370016  586929 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1017 19:58:23.386237  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1017 19:58:23.386309  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1017 19:58:23.475393  586929 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.199637854s)
	I1017 19:58:23.476267  586929 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1017 19:58:23.476230  586929 node_ready.go:35] waiting up to 6m0s for node "addons-948763" to be "Ready" ...
	I1017 19:58:23.511950  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1017 19:58:23.511972  586929 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1017 19:58:23.549679  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1017 19:58:23.549699  586929 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1017 19:58:23.552427  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1017 19:58:23.607297  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1017 19:58:23.607371  586929 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1017 19:58:23.626276  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1017 19:58:23.626350  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1017 19:58:23.633875  586929 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:58:23.633941  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1017 19:58:23.642361  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1017 19:58:23.658307  586929 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:23.658376  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1017 19:58:23.760615  586929 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:58:23.760705  586929 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1017 19:58:23.762562  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1017 19:58:23.762619  586929 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1017 19:58:23.781797  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1017 19:58:23.781872  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1017 19:58:23.801328  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1017 19:58:23.863167  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:23.937053  586929 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:58:23.937117  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1017 19:58:23.941299  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1017 19:58:23.970915  586929 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1017 19:58:23.970982  586929 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1017 19:58:23.982225  586929 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-948763" context rescaled to 1 replicas
	I1017 19:58:24.171843  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1017 19:58:24.171872  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1017 19:58:24.175044  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1017 19:58:24.295379  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1017 19:58:24.295406  586929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1017 19:58:24.616212  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1017 19:58:24.616286  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1017 19:58:24.883205  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1017 19:58:24.883279  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1017 19:58:24.988524  586929 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1017 19:58:24.988599  586929 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1017 19:58:25.171854  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1017 19:58:25.492521  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
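node_ready.go warnings like the one above come from repeatedly checking whether the node's Ready condition has turned True, within the 6m0s budget logged earlier. A hedged client-go sketch of such a check; the kubeconfig path and node name are taken from the log, while the polling interval and error handling are assumptions:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node currently has Ready=True.
	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := nodeReady(cs, "addons-948763")
			if err == nil && ok {
				fmt.Println("node is Ready")
				return
			}
			// Corresponds to the node_ready.go "will retry" lines above.
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for node to be Ready")
	}
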
	I1017 19:58:26.148830  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.068503777s)
	I1017 19:58:26.183740  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.062243157s)
	I1017 19:58:26.183849  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.03340934s)
	I1017 19:58:26.183921  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.998960927s)
	I1017 19:58:26.531127  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (3.322104332s)
	W1017 19:58:26.531348  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:26.531389  586929 retry.go:31] will retry after 373.27079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
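Each failed apply above is followed by a retry.go line announcing a fresh backoff before the same kubectl command is re-run (eventually with --force, as seen further down). A minimal sketch of that apply-and-retry pattern, not minikube's actual retry implementation; the attempt count and backoff values below are placeholders:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// applyWithRetry re-runs `kubectl apply` on the given manifests until it
	// succeeds or the attempts are exhausted, sleeping a growing backoff in
	// between, similar in spirit to the retry messages in the log above.
	func applyWithRetry(attempts int, backoff time.Duration, manifests ...string) error {
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			if err == nil {
				return nil
			}
			lastErr = fmt.Errorf("apply failed: %v\n%s", err, out)
			fmt.Printf("will retry after %s: %v\n", backoff, err)
			time.Sleep(backoff)
			backoff *= 2
		}
		return lastErr
	}

	func main() {
		err := applyWithRetry(5, 300*time.Millisecond,
			"/etc/kubernetes/addons/ig-crd.yaml",
			"/etc/kubernetes/addons/ig-deployment.yaml")
		if err != nil {
			fmt.Println(err)
		}
	}
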
	I1017 19:58:26.531249  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (3.211518835s)
	I1017 19:58:26.531320  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.164916819s)
	I1017 19:58:26.905332  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:27.260711  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.892418558s)
	I1017 19:58:27.260992  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.708505056s)
	W1017 19:58:27.496714  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:28.298594  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.65615438s)
	I1017 19:58:28.298628  586929 addons.go:479] Verifying addon ingress=true in "addons-948763"
	I1017 19:58:28.298819  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.497414959s)
	I1017 19:58:28.298842  586929 addons.go:479] Verifying addon registry=true in "addons-948763"
	I1017 19:58:28.299340  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (4.436090428s)
	W1017 19:58:28.299380  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:58:28.299398  586929 retry.go:31] will retry after 127.671904ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1017 19:58:28.299480  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.35810973s)
	I1017 19:58:28.299495  586929 addons.go:479] Verifying addon metrics-server=true in "addons-948763"
	I1017 19:58:28.299541  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.124470517s)
	I1017 19:58:28.303009  586929 out.go:179] * Verifying registry addon...
	I1017 19:58:28.303145  586929 out.go:179] * Verifying ingress addon...
	I1017 19:58:28.304911  586929 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-948763 service yakd-dashboard -n yakd-dashboard
	
	I1017 19:58:28.307714  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1017 19:58:28.308421  586929 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1017 19:58:28.325328  586929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:58:28.325356  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:28.325640  586929 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1017 19:58:28.325657  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
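The kapi.go lines above list pods by label selector and keep polling while they report Pending. A rough client-go sketch of that wait, with the namespace and selector taken from the registry case in the log and the interval and timeout assumed for illustration:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPods polls until every pod matching the selector is Running.
	func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					// Mirrors the "current state: Pending" lines above.
					running = false
				}
			}
			if running {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for pods %q in %q", selector, ns)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPods(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
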
	I1017 19:58:28.427716  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1017 19:58:28.726182  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.554235945s)
	I1017 19:58:28.726211  586929 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-948763"
	I1017 19:58:28.726416  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.821057124s)
	W1017 19:58:28.726444  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:28.726463  586929 retry.go:31] will retry after 207.480469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:28.730788  586929 out.go:179] * Verifying csi-hostpath-driver addon...
	I1017 19:58:28.734608  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1017 19:58:28.751494  586929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:58:28.751520  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:28.812986  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:28.813229  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:28.935006  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:29.238697  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:29.313015  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:29.314058  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:29.738819  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:29.812479  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:29.813821  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:58:29.981247  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:30.059292  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1017 19:58:30.059397  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:30.086753  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:30.200886  586929 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1017 19:58:30.214132  586929 addons.go:238] Setting addon gcp-auth=true in "addons-948763"
	I1017 19:58:30.214232  586929 host.go:66] Checking if "addons-948763" exists ...
	I1017 19:58:30.214728  586929 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 19:58:30.232292  586929 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1017 19:58:30.232345  586929 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 19:58:30.238956  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:30.251690  586929 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 19:58:30.311985  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:30.312138  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:30.738110  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:30.812276  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:30.812379  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.238739  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:31.264444  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.836681738s)
	I1017 19:58:31.264588  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.329545594s)
	W1017 19:58:31.264628  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:31.264644  586929 retry.go:31] will retry after 495.562147ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:31.264679  586929 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.032367954s)
	I1017 19:58:31.267893  586929 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1017 19:58:31.270773  586929 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1017 19:58:31.273610  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1017 19:58:31.273629  586929 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1017 19:58:31.286595  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1017 19:58:31.286661  586929 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1017 19:58:31.299611  586929 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:58:31.299633  586929 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1017 19:58:31.312611  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:31.313575  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.317298  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1017 19:58:31.744598  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:31.760635  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:31.830896  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:31.831396  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:31.845961  586929 addons.go:479] Verifying addon gcp-auth=true in "addons-948763"
	I1017 19:58:31.849235  586929 out.go:179] * Verifying gcp-auth addon...
	I1017 19:58:31.852402  586929 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1017 19:58:31.864258  586929 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1017 19:58:31.864281  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:32.238561  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:32.313317  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:32.314123  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:32.355659  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:32.481554  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	W1017 19:58:32.604265  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:32.604301  586929 retry.go:31] will retry after 961.088268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:32.739258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:32.812661  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:32.812797  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:32.855366  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:33.238389  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:33.311892  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:33.312262  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:33.356653  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:33.565681  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:33.738580  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:33.812241  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:33.813446  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:33.855349  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:34.239086  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:34.311630  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:34.313306  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:34.356317  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:34.378355  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:34.378388  586929 retry.go:31] will retry after 760.335078ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:34.737815  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:34.811772  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:34.812290  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:34.856179  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:34.979814  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:35.138993  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:35.238857  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:35.313744  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:35.314234  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:35.356026  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:35.740268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:35.813486  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:35.814175  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:35.856115  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:35.957027  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:35.957059  586929 retry.go:31] will retry after 2.24202928s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:36.237702  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:36.311862  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:36.311940  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:36.355840  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:36.738122  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:36.812688  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:36.815940  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:36.862926  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:36.980803  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:37.238284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:37.312849  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:37.313246  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:37.355869  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:37.738289  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:37.812431  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:37.812916  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:37.855545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:38.199688  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:38.238581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:38.312106  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:38.312139  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:38.356091  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:38.737680  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:38.813284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:38.814091  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:38.857236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:39.023411  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:39.023443  586929 retry.go:31] will retry after 2.002306756s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:39.238435  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:39.312586  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:39.312920  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:39.355557  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:39.480352  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:39.737657  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:39.812135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:39.812282  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:39.856069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:40.238058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:40.312249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:40.312361  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:40.356145  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:40.738262  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:40.812939  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:40.813298  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:40.855863  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:41.025875  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:41.238025  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:41.314627  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:41.315083  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:41.356189  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:41.480919  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:41.738866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:41.813325  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:41.813949  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1017 19:58:41.842789  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:41.842820  586929 retry.go:31] will retry after 5.091243261s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:41.855535  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:42.239423  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:42.312138  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:42.312362  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:42.355340  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:42.738480  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:42.811845  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:42.811939  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:42.855843  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:43.237784  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:43.311517  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:43.311789  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:43.355320  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:43.738337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:43.812081  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:43.812376  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:43.855391  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:43.980450  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:44.238279  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:44.312612  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:44.312980  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:44.356130  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:44.738832  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:44.812130  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:44.812211  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:44.856181  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:45.238712  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:45.313329  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:45.313485  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:45.356011  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:45.738670  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:45.811793  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:45.812005  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:45.855800  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:45.980628  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:46.238231  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:46.312485  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:46.312638  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:46.355599  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:46.738243  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:46.812232  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:46.812381  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:46.855377  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:46.934489  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:47.238581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:47.313284  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:47.313525  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:47.355857  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:47.739343  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1017 19:58:47.764981  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:47.765015  586929 retry.go:31] will retry after 9.407508894s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:47.812186  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:47.812391  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:47.856027  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:48.238250  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:48.312559  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:48.312642  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:48.356152  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:48.479926  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:48.737989  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:48.812167  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:48.812341  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:48.855803  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:49.238303  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:49.312346  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:49.312977  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:49.355829  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:49.737701  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:49.812043  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:49.812497  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:49.855347  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:50.238079  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:50.311932  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:50.312232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:50.355962  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:50.480587  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:50.737859  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:50.811948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:50.812146  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:50.855948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:51.238584  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:51.311554  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:51.311948  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:51.355674  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:51.738386  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:51.812126  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:51.812248  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:51.855712  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:52.237379  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:52.312875  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:52.312952  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:52.355807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:52.480745  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:52.737676  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:52.812763  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:52.812981  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:52.855892  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:53.238304  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:53.312382  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:53.312515  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:53.356807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:53.738199  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:53.812663  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:53.812910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:53.856123  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:54.238369  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:54.311898  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:54.312376  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:54.356279  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:54.738669  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:54.812546  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:54.812686  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:54.855752  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:54.980826  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:55.238403  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:55.312949  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:55.313167  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:55.356240  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:55.738119  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:55.811545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:55.811746  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:55.855537  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:56.238231  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:56.312493  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:56.312737  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:56.355925  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:56.737755  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:56.811631  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:56.811924  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:56.855709  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:57.173686  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:58:57.241877  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:57.313278  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:57.314309  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:57.355181  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:57.480310  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:57.738564  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:57.812794  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:57.813734  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:57.855510  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:58.007461  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:58.007495  586929 retry.go:31] will retry after 11.364388122s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:58:58.238843  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:58.312818  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:58.313238  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:58.356183  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:58.737814  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:58.812866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:58.813020  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:58.855617  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:58:59.237698  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:59.311984  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:59.312144  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:59.355787  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:58:59.480924  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:58:59.738412  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:58:59.811398  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:58:59.811833  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:58:59.856475  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:00.239225  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:00.318909  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:00.320580  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:00.356704  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:00.738756  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:00.811520  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:00.811910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:00.855599  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:01.237828  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:01.312573  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:01.313035  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:01.355740  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:01.737716  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:01.812436  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:01.812518  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:01.855811  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1017 19:59:01.980449  586929 node_ready.go:57] node "addons-948763" has "Ready":"False" status (will retry)
	I1017 19:59:02.242777  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:02.365244  586929 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1017 19:59:02.365275  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:02.366866  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:02.407612  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:02.482348  586929 node_ready.go:49] node "addons-948763" is "Ready"
	I1017 19:59:02.482429  586929 node_ready.go:38] duration metric: took 39.005250438s for node "addons-948763" to be "Ready" ...
	I1017 19:59:02.482458  586929 api_server.go:52] waiting for apiserver process to appear ...
	I1017 19:59:02.482561  586929 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 19:59:02.497489  586929 api_server.go:72] duration metric: took 40.555471355s to wait for apiserver process to appear ...
	I1017 19:59:02.497564  586929 api_server.go:88] waiting for apiserver healthz status ...
	I1017 19:59:02.497598  586929 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 19:59:02.510989  586929 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 19:59:02.516902  586929 api_server.go:141] control plane version: v1.34.1
	I1017 19:59:02.516983  586929 api_server.go:131] duration metric: took 19.399064ms to wait for apiserver health ...
	I1017 19:59:02.517006  586929 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 19:59:02.537015  586929 system_pods.go:59] 19 kube-system pods found
	I1017 19:59:02.537108  586929 system_pods.go:61] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.537133  586929 system_pods.go:61] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending
	I1017 19:59:02.537193  586929 system_pods.go:61] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.537217  586929 system_pods.go:61] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.537249  586929 system_pods.go:61] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.537280  586929 system_pods.go:61] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.537305  586929 system_pods.go:61] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.537324  586929 system_pods.go:61] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.537358  586929 system_pods.go:61] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.537380  586929 system_pods.go:61] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.537401  586929 system_pods.go:61] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.537442  586929 system_pods.go:61] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.537477  586929 system_pods.go:61] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending
	I1017 19:59:02.537497  586929 system_pods.go:61] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending
	I1017 19:59:02.537534  586929 system_pods.go:61] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.537559  586929 system_pods.go:61] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending
	I1017 19:59:02.537583  586929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.537619  586929 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending
	I1017 19:59:02.537644  586929 system_pods.go:61] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.537665  586929 system_pods.go:74] duration metric: took 20.638589ms to wait for pod list to return data ...
	I1017 19:59:02.537704  586929 default_sa.go:34] waiting for default service account to be created ...
	I1017 19:59:02.582739  586929 default_sa.go:45] found service account: "default"
	I1017 19:59:02.582815  586929 default_sa.go:55] duration metric: took 45.086718ms for default service account to be created ...
	I1017 19:59:02.582840  586929 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 19:59:02.637256  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:02.637390  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.637415  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending
	I1017 19:59:02.637468  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.637494  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.637534  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.637557  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.637577  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.637613  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.637639  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.637662  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.637697  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.637725  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.637747  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending
	I1017 19:59:02.637793  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending
	I1017 19:59:02.637822  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.637844  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending
	I1017 19:59:02.637889  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.637910  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending
	I1017 19:59:02.637945  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.637982  586929 retry.go:31] will retry after 271.325382ms: missing components: kube-dns
	I1017 19:59:02.744002  586929 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1017 19:59:02.744070  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:02.813491  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:02.813918  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:02.857997  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:02.934861  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:02.934907  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:02.934935  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:02.934950  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:02.934956  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending
	I1017 19:59:02.934960  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:02.934983  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:02.934996  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:02.935001  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:02.935020  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending
	I1017 19:59:02.935031  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:02.935036  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:02.935056  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:02.935069  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:02.935076  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:02.935173  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:02.935189  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:02.935197  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.935222  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:02.935233  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending
	I1017 19:59:02.935264  586929 retry.go:31] will retry after 375.666875ms: missing components: kube-dns
	I1017 19:59:03.241332  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:03.352532  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:03.352910  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:03.360779  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:03.360818  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:03.360836  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:03.360863  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:03.360876  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:03.360881  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:03.360887  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:03.360895  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:03.360899  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:03.360925  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:03.360937  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:03.360942  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:03.360957  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:03.360974  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:03.360981  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:03.361007  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:03.361019  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:03.361026  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.361037  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.361045  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:59:03.361078  586929 retry.go:31] will retry after 480.266829ms: missing components: kube-dns
	I1017 19:59:03.445079  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:03.739402  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:03.813331  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:03.813410  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:03.846588  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:03.846624  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 19:59:03.846660  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:03.846673  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:03.846681  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:03.846690  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:03.846696  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:03.846715  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:03.846726  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:03.846733  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:03.846752  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:03.846763  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:03.846770  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:03.846790  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:03.846803  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:03.846810  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:03.846836  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:03.846844  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.846867  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:03.846880  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 19:59:03.846897  586929 retry.go:31] will retry after 571.985018ms: missing components: kube-dns
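The system_pods lines above come from minikube listing every pod in kube-system, recording phase and container readiness, and retrying until required components such as kube-dns report Running. A minimal sketch of that kind of check with client-go follows; the clientset construction, the required-component names, and the fixed poll interval are assumptions for illustration, not minikube's actual system_pods.go logic.

// sketch: poll kube-system until required component pods are Running
// (illustrative only; not minikube's implementation)
package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForComponents(cs *kubernetes.Clientset, required []string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		missing := map[string]bool{}
		for _, name := range required {
			missing[name] = true
		}
		for _, p := range pods.Items {
			for name := range missing {
				if strings.HasPrefix(p.Name, name) && p.Status.Phase == corev1.PodRunning {
					delete(missing, name)
				}
			}
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("missing components: %v", missing)
		}
		time.Sleep(500 * time.Millisecond) // assumed fixed interval; the log shows jittered retries
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForComponents(cs, []string{"coredns", "etcd", "kube-apiserver"}, 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("all required kube-system components are Running")
}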
	I1017 19:59:03.856399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:04.238332  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:04.339826  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:04.339902  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:04.440267  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:04.441028  586929 system_pods.go:86] 19 kube-system pods found
	I1017 19:59:04.441055  586929 system_pods.go:89] "coredns-66bc5c9577-f4b6j" [b384cc70-000f-46de-bbba-fe79d28af1f6] Running
	I1017 19:59:04.441092  586929 system_pods.go:89] "csi-hostpath-attacher-0" [6b206917-3384-4c3d-8dc7-767ab351d7a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1017 19:59:04.441107  586929 system_pods.go:89] "csi-hostpath-resizer-0" [f3c93a3e-ba41-4104-8b40-aebb419fbce3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1017 19:59:04.441117  586929 system_pods.go:89] "csi-hostpathplugin-7b6l4" [d059aa30-13b1-4d63-b6e8-95c80cf017fa] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1017 19:59:04.441128  586929 system_pods.go:89] "etcd-addons-948763" [c300e63f-c827-4522-8a87-9623fd048e18] Running
	I1017 19:59:04.441133  586929 system_pods.go:89] "kindnet-kr7qd" [d22c2afb-a4f5-4b60-a415-e1f9a2e674d2] Running
	I1017 19:59:04.441138  586929 system_pods.go:89] "kube-apiserver-addons-948763" [d67660be-614b-48ff-8f37-b63c5fa38c49] Running
	I1017 19:59:04.441171  586929 system_pods.go:89] "kube-controller-manager-addons-948763" [68b24581-d277-4a42-b5fd-d4b293528ed1] Running
	I1017 19:59:04.441187  586929 system_pods.go:89] "kube-ingress-dns-minikube" [2170cb76-7102-42a8-90ec-62d5295ffe7a] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1017 19:59:04.441192  586929 system_pods.go:89] "kube-proxy-qtcs2" [8dae142b-0c07-4cc3-b7d2-905e1e65a44e] Running
	I1017 19:59:04.441197  586929 system_pods.go:89] "kube-scheduler-addons-948763" [16e00367-dc26-4a16-a290-cd365371c3e8] Running
	I1017 19:59:04.441203  586929 system_pods.go:89] "metrics-server-85b7d694d7-h9xx7" [9d47071b-3e5c-41fb-bfff-0969162955d3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1017 19:59:04.441215  586929 system_pods.go:89] "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1017 19:59:04.441224  586929 system_pods.go:89] "registry-6b586f9694-pj8zh" [99bf8ff0-c52c-4ec3-aefb-2542f6746772] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1017 19:59:04.441260  586929 system_pods.go:89] "registry-creds-764b6fb674-w5f6g" [e0294cdc-575e-4386-8522-945e51a2b371] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1017 19:59:04.441271  586929 system_pods.go:89] "registry-proxy-5jjqn" [7952bd47-40e6-4e46-8637-08e3bdb52e92] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1017 19:59:04.441286  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-89gvs" [5ebe8a3f-d070-4142-9275-f962b7524d7e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:04.441294  586929 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pp66v" [c2f4eeaf-127f-432b-b23f-40126f6b41bf] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1017 19:59:04.441302  586929 system_pods.go:89] "storage-provisioner" [f7d00427-85c5-41fe-a01e-2c33e8b4e2dd] Running
	I1017 19:59:04.441311  586929 system_pods.go:126] duration metric: took 1.858452185s to wait for k8s-apps to be running ...
	I1017 19:59:04.441336  586929 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 19:59:04.441418  586929 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 19:59:04.456100  586929 system_svc.go:56] duration metric: took 14.75464ms WaitForService to wait for kubelet
	I1017 19:59:04.456142  586929 kubeadm.go:586] duration metric: took 42.514128593s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 19:59:04.456161  586929 node_conditions.go:102] verifying NodePressure condition ...
	I1017 19:59:04.459033  586929 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 19:59:04.459066  586929 node_conditions.go:123] node cpu capacity is 2
	I1017 19:59:04.459088  586929 node_conditions.go:105] duration metric: took 2.905092ms to run NodePressure ...
	I1017 19:59:04.459207  586929 start.go:241] waiting for startup goroutines ...
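The system_svc lines show how the kubelet wait is implemented: minikube runs `sudo systemctl is-active --quiet service kubelet` through its ssh_runner and treats a zero exit status as "running". A local-exec sketch of the same probe follows; running the command directly instead of over SSH, and the return-value shape, are assumptions for illustration.

// sketch: probe a systemd unit the way the system_svc check above does,
// treating exit status 0 from `systemctl is-active --quiet` as active.
// Runs locally here; minikube issues the same command over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func serviceActive(name string) (bool, time.Duration, error) {
	start := time.Now()
	// arguments mirror the command in the log verbatim
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", name)
	err := cmd.Run()
	elapsed := time.Since(start)
	if err == nil {
		return true, elapsed, nil
	}
	if _, ok := err.(*exec.ExitError); ok {
		// non-zero exit: the unit is not active (or unknown)
		return false, elapsed, nil
	}
	return false, elapsed, err // the command itself could not be run
}

func main() {
	active, took, err := serviceActive("kubelet")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kubelet active=%v (checked in %s)\n", active, took)
}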
	I1017 19:59:04.738432  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:04.812946  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:04.813080  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:04.856012  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:05.238766  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:05.313036  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:05.313397  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:05.355844  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:05.738619  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:05.812358  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:05.812702  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:05.855535  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:06.238660  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:06.314138  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:06.314610  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:06.356685  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:06.737695  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:06.812587  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:06.814506  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:06.855436  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:07.238320  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:07.312722  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:07.312893  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:07.356122  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:07.739014  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:07.839475  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:07.839735  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:07.855834  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:08.238638  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:08.312107  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:08.312583  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:08.355935  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:08.738381  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:08.812430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:08.812579  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:08.855380  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:09.237548  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:09.312371  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:09.312473  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:09.355146  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:09.372474  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:09.739069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:09.838569  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:09.838690  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:09.939289  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:10.238058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:10.314552  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:10.315002  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:10.385258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:10.419186  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.046627359s)
	W1017 19:59:10.419217  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:10.419238  586929 retry.go:31] will retry after 11.526262814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
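The failure above is kubectl's client-side validation: every document in an applied manifest must declare both apiVersion and kind, and ig-crd.yaml evidently contains a document where they are unset, so the apply exits 1 even though the other gadget resources come back unchanged or configured. A small pre-check that scans a multi-document YAML file for those two fields is sketched below; the use of gopkg.in/yaml.v3 and the local file path are assumptions for illustration, not part of minikube or kubectl.

// sketch: flag YAML documents missing apiVersion or kind, the condition
// kubectl's validation rejects in the log above. Assumes gopkg.in/yaml.v3;
// this is not how kubectl itself validates.
package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func checkManifest(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			return nil
		}
		if err != nil {
			return fmt.Errorf("document %d: %w", i, err)
		}
		if doc == nil { // empty document between separators
			continue
		}
		if doc["apiVersion"] == nil || doc["kind"] == nil {
			return fmt.Errorf("document %d: apiVersion or kind not set", i)
		}
	}
}

func main() {
	// "ig-crd.yaml" here is a hypothetical local copy of the addon manifest
	if err := checkManifest("ig-crd.yaml"); err != nil {
		fmt.Fprintln(os.Stderr, "validation would fail:", err)
		os.Exit(1)
	}
	fmt.Println("all documents set apiVersion and kind")
}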
	I1017 19:59:10.738147  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:10.812893  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:10.813005  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:10.855716  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:11.238364  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:11.313066  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:11.313154  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:11.356638  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:11.741318  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:11.814098  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:11.817835  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:11.856147  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:12.238981  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:12.313581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:12.313772  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:12.356280  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:12.738524  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:12.818606  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:12.820162  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:12.856214  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:13.239792  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:13.313411  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:13.313764  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:13.357816  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:13.739069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:13.828407  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:13.828867  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:13.857584  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:14.238452  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:14.313233  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:14.313547  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:14.355556  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:14.738132  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:14.812278  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:14.812439  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:14.855583  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:15.238306  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:15.312884  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:15.312995  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:15.356431  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:15.738894  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:15.813025  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:15.813149  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:15.856040  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:16.238613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:16.312590  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:16.313259  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:16.356345  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:16.738477  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:16.812449  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:16.812621  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:16.855331  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:17.237754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:17.312574  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:17.312789  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:17.355687  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:17.739248  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:17.814314  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:17.814926  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:17.856880  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:18.238419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:18.313618  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:18.315474  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:18.368124  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:18.739648  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:18.812968  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:18.813327  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:18.856201  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:19.238779  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:19.312921  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:19.313253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:19.356616  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:19.738890  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:19.812534  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:19.813240  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:19.855711  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:20.238529  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:20.312248  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:20.312814  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:20.355442  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:20.738266  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:20.812558  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:20.812820  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:20.856099  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.239419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:21.313384  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:21.313538  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:21.358372  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.741579  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:21.816807  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:21.817232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:21.856786  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:21.946157  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:22.239235  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:22.314661  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:22.316239  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:22.363077  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:22.739177  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:22.814087  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:22.814924  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:22.856090  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:23.238639  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:23.313775  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:23.315073  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:23.350027  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.403726603s)
	W1017 19:59:23.350105  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:23.350140  586929 retry.go:31] will retry after 31.29214394s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
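The retry.go lines show the apply being re-attempted after growing, jittered delays (571.985018ms, then 11.526262814s, then 31.29214394s) rather than at a fixed interval. A generic sketch of that pattern, exponential backoff with jitter, follows; the base delay, growth factor, jitter range, and attempt cap are assumptions for illustration, since the log only records the resulting wait times.

// sketch: retry a failing operation with exponential backoff plus jitter,
// the shape suggested by the "will retry after ..." lines above.
// All tuning constants here are assumed, not minikube's.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(op func() error, attempts int, base time.Duration) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		// jitter the delay by up to +50% so concurrent retries spread out
		jittered := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 3 // assumed growth factor
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := retryWithBackoff(func() error {
		calls++
		if calls < 3 {
			return errors.New("apply failed") // stand-in for the kubectl apply above
		}
		return nil
	}, 5, 500*time.Millisecond)
	fmt.Println("result:", err, "calls:", calls)
}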
	I1017 19:59:23.356561  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:23.739887  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:23.813521  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:23.813856  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:23.856380  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:24.238550  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:24.312508  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:24.313135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:24.356071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:24.738067  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:24.812498  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:24.813232  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:24.855858  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:25.238640  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:25.312249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:25.312444  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:25.355312  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:25.738602  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:25.812525  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:25.813218  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:25.856318  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:26.238268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:26.312662  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:26.312988  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:26.356586  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:26.738253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:26.812926  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:26.813041  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:26.865582  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:27.238253  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:27.313773  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:27.313904  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:27.355981  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:27.739029  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:27.813102  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:27.813236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:27.856197  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:28.240687  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:28.339246  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:28.339837  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:28.355212  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:28.739018  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:28.814191  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:28.814522  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:28.855488  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:29.238220  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:29.312544  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:29.313135  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:29.356124  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:29.738990  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:29.812218  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:29.818461  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:29.855419  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:30.239292  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:30.312933  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:30.314117  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:30.355872  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:30.739058  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:30.812631  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:30.812820  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:30.855934  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:31.238696  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:31.312660  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:31.312904  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:31.355973  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:31.739226  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:31.814057  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:31.814220  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:31.856206  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:32.252136  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:32.313228  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:32.313367  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:32.355822  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:32.738732  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:32.813418  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:32.814072  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:32.856060  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:33.240524  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:33.314426  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:33.314975  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:33.356877  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:33.767467  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:33.812950  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:33.813168  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:33.856227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:34.238995  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:34.340261  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:34.340409  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:34.440305  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:34.739479  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:34.812843  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:34.813215  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:34.856399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:35.239154  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:35.318083  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:35.318426  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:35.356454  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:35.738758  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:35.814268  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:35.814903  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:35.856384  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:36.238227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:36.314350  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:36.314780  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:36.356237  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:36.737509  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:36.812389  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:36.812733  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:36.855815  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:37.240596  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:37.314540  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:37.314970  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:37.356415  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:37.738405  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:37.812751  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:37.813486  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:37.856012  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:38.238822  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:38.313446  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:38.313881  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:38.356522  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:38.738703  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:38.811767  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:38.812413  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:38.855884  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:39.238671  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:39.312436  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:39.312704  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:39.355319  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:39.738490  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:39.813123  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:39.813316  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:39.855588  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:40.237755  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:40.312768  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:40.312903  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:40.355865  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:40.738491  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:40.812718  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:40.814079  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:40.856548  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:41.238615  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:41.313320  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:41.313660  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:41.356226  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:41.739496  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:41.812776  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:41.812823  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:41.855727  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:42.239010  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:42.314278  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:42.314861  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:42.355905  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:42.738267  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:42.812270  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:42.814073  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:42.856227  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:43.238430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:43.313649  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:43.314143  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:43.413158  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:43.740339  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:43.840944  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:43.841096  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:43.856280  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:44.239095  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:44.312549  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:44.312726  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:44.356415  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:44.739467  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:44.839866  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:44.840046  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:44.856168  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:45.241190  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:45.320286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:45.319960  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:45.357049  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:45.738195  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:45.813025  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:45.813182  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:45.860286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:46.238747  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:46.312669  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:46.313353  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:46.356271  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:46.737988  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:46.813863  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:46.814260  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:46.856069  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:47.241236  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:47.341754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:47.341862  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:47.355581  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:47.739182  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:47.840191  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:47.840329  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:47.856547  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:48.238027  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:48.312048  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:48.312551  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:48.355062  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:48.738430  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:48.812531  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:48.812948  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:48.855887  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:49.239856  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:49.314144  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:49.314277  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:49.356672  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:49.739664  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:49.841587  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:49.841967  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:49.856304  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:50.238020  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:50.312150  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:50.312349  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:50.355854  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:50.738567  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:50.812356  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:50.813240  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:50.856085  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:51.239005  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:51.313548  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:51.313998  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1017 19:59:51.414558  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:51.738677  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:51.817287  586929 kapi.go:107] duration metric: took 1m23.509571343s to wait for kubernetes.io/minikube-addons=registry ...
	I1017 19:59:51.838722  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:51.855499  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:52.238155  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:52.311993  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:52.355785  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:52.738519  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:52.812520  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:52.856286  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:53.238379  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:53.312561  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:53.355277  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:53.740731  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:53.812011  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:53.856185  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:54.238790  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:54.312102  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:54.356139  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:54.642602  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1017 19:59:54.738545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:54.812882  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:54.856326  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.238192  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:55.312615  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:55.355798  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.739399  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:55.812019  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:55.856194  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:55.925878  586929 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.283237896s)
	W1017 19:59:55.925938  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:55.925964  586929 retry.go:31] will retry after 22.626956912s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1017 19:59:56.239237  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:56.312193  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:56.356298  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:56.738665  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:56.811844  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:56.855998  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:57.239221  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:57.312370  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:57.355374  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:57.738613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:57.813008  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:57.856928  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:58.239551  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:58.314180  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:58.357709  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:58.748350  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:58.813096  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:58.856405  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:59.240337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:59.316716  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:59.357201  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 19:59:59.810106  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 19:59:59.812462  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 19:59:59.856681  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:00.266746  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:00.334094  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:00.386071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:00.801249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:00.844017  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:00.891883  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:01.251666  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:01.340855  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:01.400754  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:01.755851  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:01.817865  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:01.905611  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:02.263029  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:02.323975  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:02.379391  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:02.739514  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:02.814725  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:02.856490  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:03.238401  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:03.312925  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:03.356249  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:03.738595  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:03.812312  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:03.856545  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:04.239931  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:04.312578  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:04.355551  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:04.738649  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:04.812381  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:04.855321  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:05.239445  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:05.313391  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:05.356388  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:05.738906  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:05.813280  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:05.856287  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:06.238546  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:06.311859  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:06.355953  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:06.739257  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:06.839251  586929 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1017 20:00:06.858832  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:07.238629  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:07.312824  586929 kapi.go:107] duration metric: took 1m39.004396914s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1017 20:00:07.356337  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:07.738270  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:07.855695  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:08.244912  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:08.356241  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:08.739196  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:08.859190  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:09.238770  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:09.355401  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:09.740008  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:09.856613  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:10.239097  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:10.356665  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:10.739369  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:10.856071  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:11.239747  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:11.356258  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:11.738620  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:11.856403  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1017 20:00:12.238133  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:12.355286  586929 kapi.go:107] duration metric: took 1m40.50288042s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1017 20:00:12.358582  586929 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-948763 cluster.
	I1017 20:00:12.361615  586929 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1017 20:00:12.364596  586929 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
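The last two messages above translate into concrete commands. A minimal sketch, assuming a hypothetical pod named my-pod and the usual opt-out label value (the log only names the gcp-auth-skip-secret key, so the =true value and the exact flag placement are assumptions):

    # Opt a single pod out of credential mounting (pod name hypothetical)
    kubectl label pod my-pod gcp-auth-skip-secret=true

    # Re-mount credentials into pods that already existed by re-enabling the
    # addon with --refresh, as the message above suggests
    minikube -p addons-948763 addons enable gcp-auth --refresh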
	I1017 20:00:12.738616  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:13.240736  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:13.739021  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:14.238382  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:14.738557  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:15.238421  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:15.739714  586929 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1017 20:00:16.238638  586929 kapi.go:107] duration metric: took 1m47.504028468s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1017 20:00:18.553196  586929 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1017 20:00:19.443501  586929 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1017 20:00:19.443598  586929 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
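Both failed applies report the same root cause: kubectl's client-side validation finds neither apiVersion nor kind at the top of ig-crd.yaml. A rough sketch of how one could confirm that on the node and retry the exact command with the escape hatch the error names (shell access assumed; this was not run as part of the test):

    # Check whether the manifest carries the required header fields at all
    grep -E '^(apiVersion|kind):' /etc/kubernetes/addons/ig-crd.yaml

    # Retry the same apply but skip client-side schema validation,
    # as the error message itself suggests
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml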
	I1017 20:00:19.446819  586929 out.go:179] * Enabled addons: storage-provisioner, ingress-dns, nvidia-device-plugin, cloud-spanner, registry-creds, default-storageclass, amd-gpu-device-plugin, storage-provisioner-rancher, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1017 20:00:19.449781  586929 addons.go:514] duration metric: took 1m57.507210179s for enable addons: enabled=[storage-provisioner ingress-dns nvidia-device-plugin cloud-spanner registry-creds default-storageclass amd-gpu-device-plugin storage-provisioner-rancher metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1017 20:00:19.449834  586929 start.go:246] waiting for cluster config update ...
	I1017 20:00:19.449857  586929 start.go:255] writing updated cluster config ...
	I1017 20:00:19.450170  586929 ssh_runner.go:195] Run: rm -f paused
	I1017 20:00:19.453859  586929 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:00:19.457994  586929 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-f4b6j" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.464654  586929 pod_ready.go:94] pod "coredns-66bc5c9577-f4b6j" is "Ready"
	I1017 20:00:19.464687  586929 pod_ready.go:86] duration metric: took 6.662003ms for pod "coredns-66bc5c9577-f4b6j" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.467194  586929 pod_ready.go:83] waiting for pod "etcd-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.472647  586929 pod_ready.go:94] pod "etcd-addons-948763" is "Ready"
	I1017 20:00:19.472672  586929 pod_ready.go:86] duration metric: took 5.443417ms for pod "etcd-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.475382  586929 pod_ready.go:83] waiting for pod "kube-apiserver-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.480614  586929 pod_ready.go:94] pod "kube-apiserver-addons-948763" is "Ready"
	I1017 20:00:19.480642  586929 pod_ready.go:86] duration metric: took 5.232633ms for pod "kube-apiserver-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.483234  586929 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:19.857712  586929 pod_ready.go:94] pod "kube-controller-manager-addons-948763" is "Ready"
	I1017 20:00:19.857743  586929 pod_ready.go:86] duration metric: took 374.434418ms for pod "kube-controller-manager-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.058614  586929 pod_ready.go:83] waiting for pod "kube-proxy-qtcs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.457767  586929 pod_ready.go:94] pod "kube-proxy-qtcs2" is "Ready"
	I1017 20:00:20.457796  586929 pod_ready.go:86] duration metric: took 399.155156ms for pod "kube-proxy-qtcs2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:20.661696  586929 pod_ready.go:83] waiting for pod "kube-scheduler-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:21.057956  586929 pod_ready.go:94] pod "kube-scheduler-addons-948763" is "Ready"
	I1017 20:00:21.057981  586929 pod_ready.go:86] duration metric: took 396.213255ms for pod "kube-scheduler-addons-948763" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:00:21.057993  586929 pod_ready.go:40] duration metric: took 1.604099186s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
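The readiness gate above polls kube-system pods by label until each reports Ready. A rough kubectl equivalent of the same check, for illustration only (minikube uses its own client rather than kubectl wait):

    # Wait up to 4 minutes for the same control-plane labels to become Ready
    kubectl -n kube-system wait pod -l k8s-app=kube-dns   --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l component=etcd     --for=condition=Ready --timeout=4m
    kubectl -n kube-system wait pod -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m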
	I1017 20:00:21.114830  586929 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:00:21.118132  586929 out.go:179] * Done! kubectl is now configured to use "addons-948763" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 20:00:14 addons-948763 crio[833]: time="2025-10-17T20:00:14.963559779Z" level=info msg="Created container 62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6: kube-system/csi-hostpathplugin-7b6l4/csi-snapshotter" id=6d4e5ee0-ddbb-4701-b342-5d73445b83df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:14 addons-948763 crio[833]: time="2025-10-17T20:00:14.966836256Z" level=info msg="Starting container: 62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6" id=6a3224de-6f5a-4678-be1c-a3a7fbf4f88e name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:00:14 addons-948763 crio[833]: time="2025-10-17T20:00:14.969890823Z" level=info msg="Started container" PID=4912 containerID=62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6 description=kube-system/csi-hostpathplugin-7b6l4/csi-snapshotter id=6a3224de-6f5a-4678-be1c-a3a7fbf4f88e name=/runtime.v1.RuntimeService/StartContainer sandboxID=1516bbdf8e66b8040f94ddc5821a69f00fe47181533153e85a09c36d682e447a
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.537698145Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3984eabe-3055-4e92-b0d1-af0b054c3cc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.537782339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.544458857Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a UID:56a82d5e-5c40-41d6-a49a-ad08eaba86bc NetNS:/var/run/netns/43b1b63e-483e-49b6-a402-3240499e4923 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cd88}] Aliases:map[]}"
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.54449391Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.557747235Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a UID:56a82d5e-5c40-41d6-a49a-ad08eaba86bc NetNS:/var/run/netns/43b1b63e-483e-49b6-a402-3240499e4923 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cd88}] Aliases:map[]}"
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.557959874Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.561617969Z" level=info msg="Ran pod sandbox d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a with infra container: default/busybox/POD" id=3984eabe-3055-4e92-b0d1-af0b054c3cc3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.562662931Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ea391a6e-4f0e-498c-af45-1834e9ca6197 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.56279383Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ea391a6e-4f0e-498c-af45-1834e9ca6197 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.562839943Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ea391a6e-4f0e-498c-af45-1834e9ca6197 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.565558505Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82ba1d46-38ed-4954-9787-48d572dc6c11 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:00:22 addons-948763 crio[833]: time="2025-10-17T20:00:22.569462429Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.541818799Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=82ba1d46-38ed-4954-9787-48d572dc6c11 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.542455181Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8eed99a7-08c8-418c-b6d4-09dd4e3e7c25 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.545588846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=06c51ef4-e54c-465f-ba7d-e07544e4c483 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.552603059Z" level=info msg="Creating container: default/busybox/busybox" id=774aca31-d1a8-4f45-bf57-9990758c9d84 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.553394865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.559966183Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.560479848Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.582814639Z" level=info msg="Created container 0c243b948deeaf840fd82617992c5b84e5df83d587517982afcbc93b2c83f1cc: default/busybox/busybox" id=774aca31-d1a8-4f45-bf57-9990758c9d84 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.586186569Z" level=info msg="Starting container: 0c243b948deeaf840fd82617992c5b84e5df83d587517982afcbc93b2c83f1cc" id=57670e26-cdbb-4408-aa2b-62132f063d69 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:00:24 addons-948763 crio[833]: time="2025-10-17T20:00:24.592357914Z" level=info msg="Started container" PID=5030 containerID=0c243b948deeaf840fd82617992c5b84e5df83d587517982afcbc93b2c83f1cc description=default/busybox/busybox id=57670e26-cdbb-4408-aa2b-62132f063d69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	0c243b948deea       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   d9a6bc325601e       busybox                                     default
	62f5238d05d6d       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          18 seconds ago       Running             csi-snapshotter                          0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	d36d10bdfcbed       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          19 seconds ago       Running             csi-provisioner                          0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	c53a40a0c62b8       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            21 seconds ago       Running             liveness-probe                           0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	d595a7efff522       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 22 seconds ago       Running             gcp-auth                                 0                   bfefd79051e1f       gcp-auth-78565c9fb4-k9ct2                   gcp-auth
	bcc361491949c       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           25 seconds ago       Running             hostpath                                 0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	ca753fcea18a0       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             26 seconds ago       Running             controller                               0                   dfc9ff6468ac1       ingress-nginx-controller-675c5ddd98-xc8ch   ingress-nginx
	8aa75ff35c2a2       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            35 seconds ago       Running             gadget                                   0                   940634f7b50cf       gadget-dd22p                                gadget
	ef912ed7d00d7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                39 seconds ago       Running             node-driver-registrar                    0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	b9d12f5fbfa6c       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              40 seconds ago       Running             csi-resizer                              0                   b719d74efe38f       csi-hostpath-resizer-0                      kube-system
	9bc6a4da19b29       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              42 seconds ago       Running             registry-proxy                           0                   dee4186013164       registry-proxy-5jjqn                        kube-system
	840a4de138b52       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           46 seconds ago       Running             registry                                 0                   09377d604db1a       registry-6b586f9694-pj8zh                   kube-system
	538518d7efa71       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             46 seconds ago       Exited              patch                                    2                   7a8fee96c714e       gcp-auth-certs-patch-zkg2l                  gcp-auth
	ff6d474d02691       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   48 seconds ago       Exited              patch                                    0                   7b781615d9962       ingress-nginx-admission-patch-kvpgp         ingress-nginx
	456b95b02f2ac       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   48 seconds ago       Exited              create                                   0                   f0daa2588818b       gcp-auth-certs-create-snsnr                 gcp-auth
	ec5f8e2b041c5       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             48 seconds ago       Running             local-path-provisioner                   0                   373a7efef9a5e       local-path-provisioner-648f6765c9-6fbwx     local-path-storage
	63ee152cb5bc0       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      49 seconds ago       Running             volume-snapshot-controller               0                   eb43cc5255279       snapshot-controller-7d9fbc56b8-pp66v        kube-system
	16b93abda1117       nvcr.io/nvidia/k8s-device-plugin@sha256:206d989142113ab71eaf27958a0e0a203f40103cf5b48890f5de80fd1b3fcfde                                     50 seconds ago       Running             nvidia-device-plugin-ctr                 0                   b1914199607b0       nvidia-device-plugin-daemonset-7vw8v        kube-system
	44c18bb7ee583       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   e13cb9e72a373       snapshot-controller-7d9fbc56b8-89gvs        kube-system
	21d5cbb832a96       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   1516bbdf8e66b       csi-hostpathplugin-7b6l4                    kube-system
	65f41a91bc9c3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   About a minute ago   Exited              create                                   0                   61c4a778a4f3a       ingress-nginx-admission-create-z4cxf        ingress-nginx
	eb8c078b44245       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   4f400b83a557b       yakd-dashboard-5ff678cb9-rg2kq              yakd-dashboard
	64a714f55a7cc       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   7844c7f9e5a41       kube-ingress-dns-minikube                   kube-system
	43e9813ed2b67       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   170736af7f9fc       metrics-server-85b7d694d7-h9xx7             kube-system
	f09c7137954ed       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               About a minute ago   Running             cloud-spanner-emulator                   0                   e2a5618919ceb       cloud-spanner-emulator-86bd5cbb97-mhdbb     default
	6573cd1e55f00       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   63473a0ba4a1d       csi-hostpath-attacher-0                     kube-system
	5a104d2bf0866       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   78354a824348c       storage-provisioner                         kube-system
	ae878718d77b6       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   6e8efb9135264       coredns-66bc5c9577-f4b6j                    kube-system
	c2aef7690fa71       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   c4dbec0c991dc       kindnet-kr7qd                               kube-system
	1d1813b82c050       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   7dca2e12231ba       kube-proxy-qtcs2                            kube-system
	db1d8cdc9b83a       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   1283835cc9a0d       kube-scheduler-addons-948763                kube-system
	e643c6e152656       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   64e630e41e389       kube-apiserver-addons-948763                kube-system
	fc4fe4ea2862e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   7be534eaf84aa       etcd-addons-948763                          kube-system
	cf9507fdd5ef1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   18ec12507bef3       kube-controller-manager-addons-948763       kube-system
	
	
	==> coredns [ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220] <==
	[INFO] 10.244.0.16:56351 - 13937 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000076284s
	[INFO] 10.244.0.16:56351 - 7106 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002129483s
	[INFO] 10.244.0.16:56351 - 47886 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001711237s
	[INFO] 10.244.0.16:56351 - 45493 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000114193s
	[INFO] 10.244.0.16:56351 - 31310 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000082701s
	[INFO] 10.244.0.16:40334 - 65249 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000168962s
	[INFO] 10.244.0.16:40334 - 64788 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000097667s
	[INFO] 10.244.0.16:41485 - 6197 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000095001s
	[INFO] 10.244.0.16:41485 - 5975 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000073412s
	[INFO] 10.244.0.16:46475 - 39076 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099997s
	[INFO] 10.244.0.16:46475 - 38896 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000087049s
	[INFO] 10.244.0.16:55022 - 26664 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001622752s
	[INFO] 10.244.0.16:55022 - 26483 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001362128s
	[INFO] 10.244.0.16:39227 - 48049 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104707s
	[INFO] 10.244.0.16:39227 - 47637 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000148048s
	[INFO] 10.244.0.21:49388 - 27648 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000159469s
	[INFO] 10.244.0.21:49791 - 7471 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000080428s
	[INFO] 10.244.0.21:55361 - 2130 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000145487s
	[INFO] 10.244.0.21:33493 - 4883 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000104789s
	[INFO] 10.244.0.21:55009 - 53337 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000093031s
	[INFO] 10.244.0.21:42170 - 64721 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000062146s
	[INFO] 10.244.0.21:41291 - 33418 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002650376s
	[INFO] 10.244.0.21:56465 - 20604 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002579466s
	[INFO] 10.244.0.21:34241 - 21393 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001646078s
	[INFO] 10.244.0.21:60921 - 21298 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.006457084s
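The NXDOMAIN/NOERROR pairs above are the pod resolv.conf search list being walked (the cluster.local suffixes plus the us-east-2.compute.internal host domain) before the fully qualified name resolves. A small sketch for reproducing that view from the default/busybox pod seen elsewhere in this section (assumed; not captured during the run):

    # Show the search domains that generate the extra lookups
    kubectl exec busybox -- cat /etc/resolv.conf

    # Resolve the registry Service name the same way registry-proxy does
    kubectl exec busybox -- nslookup registry.kube-system.svc.cluster.local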
	
	
	==> describe nodes <==
	Name:               addons-948763
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-948763
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=addons-948763
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T19_58_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-948763
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-948763"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 19:58:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-948763
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:00:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:00:30 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:00:30 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:00:30 +0000   Fri, 17 Oct 2025 19:58:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:00:30 +0000   Fri, 17 Oct 2025 19:59:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-948763
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                559e7844-22fd-4610-b77e-f56f5f74096c
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-mhdbb      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  gadget                      gadget-dd22p                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  gcp-auth                    gcp-auth-78565c9fb4-k9ct2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-xc8ch    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m5s
	  kube-system                 coredns-66bc5c9577-f4b6j                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m12s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 csi-hostpathplugin-7b6l4                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 etcd-addons-948763                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m17s
	  kube-system                 kindnet-kr7qd                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m12s
	  kube-system                 kube-apiserver-addons-948763                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-controller-manager-addons-948763        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-proxy-qtcs2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-scheduler-addons-948763                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 metrics-server-85b7d694d7-h9xx7              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m7s
	  kube-system                 nvidia-device-plugin-daemonset-7vw8v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-6b586f9694-pj8zh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 registry-creds-764b6fb674-w5f6g              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 registry-proxy-5jjqn                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 snapshot-controller-7d9fbc56b8-89gvs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 snapshot-controller-7d9fbc56b8-pp66v         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m7s
	  local-path-storage          local-path-provisioner-648f6765c9-6fbwx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-rg2kq               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age    From             Message
	  ----     ------                   ----   ----             -------
	  Normal   Starting                 2m11s  kube-proxy       
	  Normal   Starting                 2m17s  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m17s  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m17s  kubelet          Node addons-948763 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m17s  kubelet          Node addons-948763 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m17s  kubelet          Node addons-948763 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m13s  node-controller  Node addons-948763 event: Registered Node addons-948763 in Controller
	  Normal   NodeReady                91s    kubelet          Node addons-948763 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2] <==
	{"level":"warn","ts":"2025-10-17T19:58:11.856505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.888353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.907528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.949185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.981071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:11.997654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.028150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.052057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.090707Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.105162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.140033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.156172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.183344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.227456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.248388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.275733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.311713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.339240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:12.497337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:28.863635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:28.882234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.546650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.563923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.614510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T19:58:50.629734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38892","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [d595a7efff522120ff8e5dafc7c1075abd3e649ad90a9ffd29fd55e41beacfed] <==
	2025/10/17 20:00:11 GCP Auth Webhook started!
	2025/10/17 20:00:21 Ready to marshal response ...
	2025/10/17 20:00:21 Ready to write response ...
	2025/10/17 20:00:22 Ready to marshal response ...
	2025/10/17 20:00:22 Ready to write response ...
	2025/10/17 20:00:22 Ready to marshal response ...
	2025/10/17 20:00:22 Ready to write response ...
	
	
	==> kernel <==
	 20:00:33 up  2:42,  0 user,  load average: 3.53, 3.73, 3.52
	Linux addons-948763 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883] <==
	E1017 19:58:52.121567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 19:58:52.121641       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 19:58:53.819962       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 19:58:53.819989       1 metrics.go:72] Registering metrics
	I1017 19:58:53.820064       1 controller.go:711] "Syncing nftables rules"
	I1017 19:59:02.121296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:02.121371       1 main.go:301] handling current node
	I1017 19:59:12.124441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:12.124486       1 main.go:301] handling current node
	I1017 19:59:22.120470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:22.120498       1 main.go:301] handling current node
	I1017 19:59:32.120871       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:32.120908       1 main.go:301] handling current node
	I1017 19:59:42.121173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:42.121215       1 main.go:301] handling current node
	I1017 19:59:52.120488       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 19:59:52.120530       1 main.go:301] handling current node
	I1017 20:00:02.120979       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:00:02.121045       1 main.go:301] handling current node
	I1017 20:00:12.120801       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:00:12.120836       1 main.go:301] handling current node
	I1017 20:00:22.120435       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:00:22.120549       1 main.go:301] handling current node
	I1017 20:00:32.121030       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:00:32.121059       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592] <==
	W1017 19:58:50.560730       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 19:58:50.613802       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 19:58:50.628528       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 19:59:02.276270       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.110.121:443: connect: connection refused
	E1017 19:59:02.276334       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:02.277165       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.110.121:443: connect: connection refused
	E1017 19:59:02.277201       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:02.329580       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.110.121:443: connect: connection refused
	E1017 19:59:02.329624       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.110.121:443: connect: connection refused" logger="UnhandledError"
	W1017 19:59:13.455511       1 handler_proxy.go:99] no RequestInfo found in the context
	E1017 19:59:13.455596       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1017 19:59:13.459848       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.460536       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.465760       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.487344       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.529066       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.610155       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	E1017 19:59:13.688429       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.248.186:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.248.186:443: connect: connection refused" logger="UnhandledError"
	I1017 19:59:13.908783       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1017 20:00:31.454184       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55892: use of closed network connection
	E1017 20:00:31.683446       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55924: use of closed network connection
	E1017 20:00:31.813148       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55944: use of closed network connection
	
	
	==> kube-controller-manager [cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7] <==
	I1017 19:58:20.579447       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:58:20.579550       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 19:58:20.579587       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 19:58:20.579689       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 19:58:20.581166       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 19:58:20.581274       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 19:58:20.582562       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 19:58:20.582706       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 19:58:20.583184       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 19:58:20.583221       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 19:58:20.583192       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 19:58:20.583271       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 19:58:20.587087       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 19:58:20.589517       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 19:58:20.593046       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 19:58:20.594796       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1017 19:58:26.858304       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1017 19:58:50.539615       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1017 19:58:50.539780       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1017 19:58:50.539833       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1017 19:58:50.602473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1017 19:58:50.606863       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1017 19:58:50.640951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 19:58:50.707551       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 19:59:05.542589       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b] <==
	I1017 19:58:21.877711       1 server_linux.go:53] "Using iptables proxy"
	I1017 19:58:21.968718       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 19:58:22.069803       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 19:58:22.069853       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 19:58:22.069933       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 19:58:22.174457       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 19:58:22.174568       1 server_linux.go:132] "Using iptables Proxier"
	I1017 19:58:22.187346       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 19:58:22.187723       1 server.go:527] "Version info" version="v1.34.1"
	I1017 19:58:22.187738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:58:22.206810       1 config.go:200] "Starting service config controller"
	I1017 19:58:22.206831       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 19:58:22.206854       1 config.go:106] "Starting endpoint slice config controller"
	I1017 19:58:22.206858       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 19:58:22.206868       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 19:58:22.206872       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 19:58:22.223668       1 config.go:309] "Starting node config controller"
	I1017 19:58:22.223690       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 19:58:22.223706       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 19:58:22.309025       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 19:58:22.309061       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 19:58:22.309346       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7] <==
	I1017 19:58:14.769751       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 19:58:14.773074       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 19:58:14.773241       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:58:14.773350       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 19:58:14.773411       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 19:58:14.783841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 19:58:14.783947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 19:58:14.784000       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 19:58:14.784054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 19:58:14.784099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 19:58:14.784145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 19:58:14.784180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 19:58:14.784274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 19:58:14.787654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 19:58:14.787731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 19:58:14.788082       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 19:58:14.788198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 19:58:14.788336       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 19:58:14.788383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 19:58:14.788453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 19:58:14.788682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 19:58:14.788789       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 19:58:14.788868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 19:58:14.788915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1017 19:58:15.673948       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 19:59:48 addons-948763 kubelet[1282]: I1017 19:59:48.614272    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-pj8zh" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:59:48 addons-948763 kubelet[1282]: I1017 19:59:48.780234    1282 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2vdh\" (UniqueName: \"kubernetes.io/projected/bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3-kube-api-access-c2vdh\") pod \"bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3\" (UID: \"bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3\") "
	Oct 17 19:59:48 addons-948763 kubelet[1282]: I1017 19:59:48.787567    1282 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3-kube-api-access-c2vdh" (OuterVolumeSpecName: "kube-api-access-c2vdh") pod "bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3" (UID: "bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3"). InnerVolumeSpecName "kube-api-access-c2vdh". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 17 19:59:48 addons-948763 kubelet[1282]: I1017 19:59:48.881752    1282 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2vdh\" (UniqueName: \"kubernetes.io/projected/bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3-kube-api-access-c2vdh\") on node \"addons-948763\" DevicePath \"\""
	Oct 17 19:59:49 addons-948763 kubelet[1282]: I1017 19:59:49.620296    1282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8fee96c714e5ccbf84282c20184780b03e8d9de5d3e0374edc81742d5784f7"
	Oct 17 19:59:51 addons-948763 kubelet[1282]: I1017 19:59:51.628270    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5jjqn" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:59:52 addons-948763 kubelet[1282]: I1017 19:59:52.638945    1282 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-5jjqn" secret="" err="secret \"gcp-auth\" not found"
	Oct 17 19:59:53 addons-948763 kubelet[1282]: I1017 19:59:53.662336    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-5jjqn" podStartSLOduration=3.73369826 podStartE2EDuration="51.662316296s" podCreationTimestamp="2025-10-17 19:59:02 +0000 UTC" firstStartedPulling="2025-10-17 19:59:03.350175809 +0000 UTC m=+46.788620348" lastFinishedPulling="2025-10-17 19:59:51.278793844 +0000 UTC m=+94.717238384" observedRunningTime="2025-10-17 19:59:51.645623028 +0000 UTC m=+95.084067576" watchObservedRunningTime="2025-10-17 19:59:53.662316296 +0000 UTC m=+97.100760844"
	Oct 17 19:59:58 addons-948763 kubelet[1282]: I1017 19:59:58.755636    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpath-resizer-0" podStartSLOduration=41.563648727 podStartE2EDuration="1m30.755619011s" podCreationTimestamp="2025-10-17 19:58:28 +0000 UTC" firstStartedPulling="2025-10-17 19:59:03.379969942 +0000 UTC m=+46.818414490" lastFinishedPulling="2025-10-17 19:59:52.571940234 +0000 UTC m=+96.010384774" observedRunningTime="2025-10-17 19:59:53.66277867 +0000 UTC m=+97.101223210" watchObservedRunningTime="2025-10-17 19:59:58.755619011 +0000 UTC m=+102.194063559"
	Oct 17 19:59:58 addons-948763 kubelet[1282]: I1017 19:59:58.756619    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-dd22p" podStartSLOduration=65.850975984 podStartE2EDuration="1m32.756607686s" podCreationTimestamp="2025-10-17 19:58:26 +0000 UTC" firstStartedPulling="2025-10-17 19:59:30.72091699 +0000 UTC m=+74.159361530" lastFinishedPulling="2025-10-17 19:59:57.626548693 +0000 UTC m=+101.064993232" observedRunningTime="2025-10-17 19:59:58.754711085 +0000 UTC m=+102.193155624" watchObservedRunningTime="2025-10-17 19:59:58.756607686 +0000 UTC m=+102.195052234"
	Oct 17 20:00:06 addons-948763 kubelet[1282]: E1017 20:00:06.179412    1282 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 17 20:00:06 addons-948763 kubelet[1282]: E1017 20:00:06.179495    1282 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e0294cdc-575e-4386-8522-945e51a2b371-gcr-creds podName:e0294cdc-575e-4386-8522-945e51a2b371 nodeName:}" failed. No retries permitted until 2025-10-17 20:01:10.179477484 +0000 UTC m=+173.617922024 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/e0294cdc-575e-4386-8522-945e51a2b371-gcr-creds") pod "registry-creds-764b6fb674-w5f6g" (UID: "e0294cdc-575e-4386-8522-945e51a2b371") : secret "registry-creds-gcr" not found
	Oct 17 20:00:06 addons-948763 kubelet[1282]: W1017 20:00:06.661806    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/crio-bfefd79051e1fa73dd72d480c07889fa6aa1987f38da2a99132483a428844784 WatchSource:0}: Error finding container bfefd79051e1fa73dd72d480c07889fa6aa1987f38da2a99132483a428844784: Status 404 returned error can't find the container with id bfefd79051e1fa73dd72d480c07889fa6aa1987f38da2a99132483a428844784
	Oct 17 20:00:08 addons-948763 kubelet[1282]: I1017 20:00:08.899168    1282 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 17 20:00:08 addons-948763 kubelet[1282]: I1017 20:00:08.899223    1282 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 17 20:00:11 addons-948763 kubelet[1282]: I1017 20:00:11.944565    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-xc8ch" podStartSLOduration=71.530344682 podStartE2EDuration="1m43.944536199s" podCreationTimestamp="2025-10-17 19:58:28 +0000 UTC" firstStartedPulling="2025-10-17 19:59:34.332792302 +0000 UTC m=+77.771236842" lastFinishedPulling="2025-10-17 20:00:06.746983819 +0000 UTC m=+110.185428359" observedRunningTime="2025-10-17 20:00:06.882024866 +0000 UTC m=+110.320469414" watchObservedRunningTime="2025-10-17 20:00:11.944536199 +0000 UTC m=+115.382980747"
	Oct 17 20:00:11 addons-948763 kubelet[1282]: I1017 20:00:11.946095    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-k9ct2" podStartSLOduration=96.452717278 podStartE2EDuration="1m40.946070891s" podCreationTimestamp="2025-10-17 19:58:31 +0000 UTC" firstStartedPulling="2025-10-17 20:00:06.665553687 +0000 UTC m=+110.103998227" lastFinishedPulling="2025-10-17 20:00:11.1589073 +0000 UTC m=+114.597351840" observedRunningTime="2025-10-17 20:00:11.943315619 +0000 UTC m=+115.381760200" watchObservedRunningTime="2025-10-17 20:00:11.946070891 +0000 UTC m=+115.384515431"
	Oct 17 20:00:18 addons-948763 kubelet[1282]: I1017 20:00:18.046967    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-7b6l4" podStartSLOduration=4.333740108 podStartE2EDuration="1m16.046939491s" podCreationTimestamp="2025-10-17 19:59:02 +0000 UTC" firstStartedPulling="2025-10-17 19:59:03.212811635 +0000 UTC m=+46.651256175" lastFinishedPulling="2025-10-17 20:00:14.92601101 +0000 UTC m=+118.364455558" observedRunningTime="2025-10-17 20:00:15.968103546 +0000 UTC m=+119.406548102" watchObservedRunningTime="2025-10-17 20:00:18.046939491 +0000 UTC m=+121.485384039"
	Oct 17 20:00:18 addons-948763 kubelet[1282]: I1017 20:00:18.687951    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43849fc7-fb42-4393-9d85-a8c9aeee4156" path="/var/lib/kubelet/pods/43849fc7-fb42-4393-9d85-a8c9aeee4156/volumes"
	Oct 17 20:00:20 addons-948763 kubelet[1282]: I1017 20:00:20.687677    1282 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3" path="/var/lib/kubelet/pods/bbf2afb1-a16d-43a1-8ca5-96c16bffe0d3/volumes"
	Oct 17 20:00:22 addons-948763 kubelet[1282]: I1017 20:00:22.346324    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sg2j\" (UniqueName: \"kubernetes.io/projected/56a82d5e-5c40-41d6-a49a-ad08eaba86bc-kube-api-access-9sg2j\") pod \"busybox\" (UID: \"56a82d5e-5c40-41d6-a49a-ad08eaba86bc\") " pod="default/busybox"
	Oct 17 20:00:22 addons-948763 kubelet[1282]: I1017 20:00:22.346905    1282 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/56a82d5e-5c40-41d6-a49a-ad08eaba86bc-gcp-creds\") pod \"busybox\" (UID: \"56a82d5e-5c40-41d6-a49a-ad08eaba86bc\") " pod="default/busybox"
	Oct 17 20:00:22 addons-948763 kubelet[1282]: W1017 20:00:22.559951    1282 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/5d47ee6e89dc55c27b394d19a6964ed45ff3a7fd214a44b90e5566691a3a6440/crio-d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a WatchSource:0}: Error finding container d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a: Status 404 returned error can't find the container with id d9a6bc325601eaece70d4663da9c3928751538c96dfeb7e4796c0d0c75fc890a
	Oct 17 20:00:25 addons-948763 kubelet[1282]: I1017 20:00:25.011184    1282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.030867159 podStartE2EDuration="3.011147579s" podCreationTimestamp="2025-10-17 20:00:22 +0000 UTC" firstStartedPulling="2025-10-17 20:00:22.563072234 +0000 UTC m=+126.001516774" lastFinishedPulling="2025-10-17 20:00:24.543352654 +0000 UTC m=+127.981797194" observedRunningTime="2025-10-17 20:00:25.007051594 +0000 UTC m=+128.445496142" watchObservedRunningTime="2025-10-17 20:00:25.011147579 +0000 UTC m=+128.449592143"
	Oct 17 20:00:31 addons-948763 kubelet[1282]: E1017 20:00:31.813501    1282 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57240->127.0.0.1:41155: write tcp 127.0.0.1:57240->127.0.0.1:41155: write: broken pipe
	
	
	==> storage-provisioner [5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a] <==
	W1017 20:00:09.964647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:11.990999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:12.023719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:14.031433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:14.037833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:16.041216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:16.046267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:18.050779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:18.061026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:20.064430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:20.069200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:22.072702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:22.082207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:24.086474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:24.091369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:26.094675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:26.101947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:28.105463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:28.109877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:30.115016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:30.120565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:32.123717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:32.132344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:34.138947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:00:34.147728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
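
A side note on the storage-provisioner block near the end of the log above: every poll emits the same deprecation warning, "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice". The warning is harmless here, but the suggested migration looks roughly like the following client-go sketch. This is illustrative only, not part of the minikube test suite, and it assumes a reachable cluster via the default kubeconfig path.

	// List discovery.k8s.io/v1 EndpointSlices instead of the deprecated core/v1 Endpoints.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes ~/.kube/config points at the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices carry the same endpoint data the old Endpoints API exposed.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("found %d EndpointSlices in kube-system\n", len(slices.Items))
	}
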
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-948763 -n addons-948763
helpers_test.go:269: (dbg) Run:  kubectl --context addons-948763 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp registry-creds-764b6fb674-w5f6g
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp registry-creds-764b6fb674-w5f6g
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp registry-creds-764b6fb674-w5f6g: exit status 1 (87.566587ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-z4cxf" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-kvpgp" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-w5f6g" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-948763 describe pod ingress-nginx-admission-create-z4cxf ingress-nginx-admission-patch-kvpgp registry-creds-764b6fb674-w5f6g: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable headlamp --alsologtostderr -v=1: exit status 11 (275.363958ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:00:34.996208  593554 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:00:34.997053  593554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:34.997064  593554 out.go:374] Setting ErrFile to fd 2...
	I1017 20:00:34.997070  593554 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:00:34.997322  593554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:00:34.997636  593554 mustload.go:65] Loading cluster: addons-948763
	I1017 20:00:34.997987  593554 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:34.998005  593554 addons.go:606] checking whether the cluster is paused
	I1017 20:00:34.998108  593554 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:00:34.998130  593554 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:00:34.998554  593554 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:00:35.020310  593554 ssh_runner.go:195] Run: systemctl --version
	I1017 20:00:35.020391  593554 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:00:35.042798  593554 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:00:35.150692  593554 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:00:35.150784  593554 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:00:35.184314  593554 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:00:35.184341  593554 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:00:35.184353  593554 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:00:35.184358  593554 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:00:35.184364  593554 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:00:35.184402  593554 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:00:35.184415  593554 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:00:35.184418  593554 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:00:35.184422  593554 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:00:35.184429  593554 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:00:35.184443  593554 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:00:35.184447  593554 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:00:35.184452  593554 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:00:35.184476  593554 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:00:35.184487  593554 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:00:35.184493  593554 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:00:35.184507  593554 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:00:35.184513  593554 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:00:35.184520  593554 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:00:35.184523  593554 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:00:35.184529  593554 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:00:35.184536  593554 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:00:35.184539  593554 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:00:35.184542  593554 cri.go:89] found id: ""
	I1017 20:00:35.184609  593554 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:00:35.201665  593554 out.go:203] 
	W1017 20:00:35.204791  593554 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:00:35Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:00:35.204824  593554 out.go:285] * 
	* 
	W1017 20:00:35.212248  593554 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:00:35.215731  593554 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.12s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.38s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-mhdbb" [073cac03-2ad6-4320-bcb8-ed97dc5922f8] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015531093s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (358.497934ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:45.184063  595430 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:45.185013  595430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:45.185044  595430 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:45.185056  595430 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:45.185439  595430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:45.185916  595430 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:45.186350  595430 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:45.186372  595430 addons.go:606] checking whether the cluster is paused
	I1017 20:01:45.186481  595430 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:45.186504  595430 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:45.187272  595430 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:45.211661  595430 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:45.211772  595430 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:45.233812  595430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:45.354837  595430 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:45.355244  595430 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:45.425444  595430 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:45.425544  595430 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:45.425565  595430 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:45.425604  595430 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:45.425626  595430 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:45.425644  595430 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:45.425662  595430 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:45.425695  595430 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:45.425717  595430 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:45.425750  595430 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:45.425783  595430 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:45.425806  595430 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:45.425824  595430 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:45.425843  595430 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:45.425871  595430 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:45.425899  595430 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:45.425926  595430 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:45.425970  595430 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:45.425994  595430 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:45.426013  595430 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:45.426049  595430 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:45.426073  595430 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:45.426096  595430 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:45.426134  595430 cri.go:89] found id: ""
	I1017 20:01:45.426310  595430 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:45.447811  595430 out.go:203] 
	W1017 20:01:45.451075  595430 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:45.451140  595430 out.go:285] * 
	* 
	W1017 20:01:45.458357  595430 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:45.461466  595430 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.38s)

                                                
                                    
TestAddons/parallel/LocalPath (11.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-948763 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-948763 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [4917448e-16ff-4c84-98d4-c58ffcd45ca2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [4917448e-16ff-4c84-98d4-c58ffcd45ca2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [4917448e-16ff-4c84-98d4-c58ffcd45ca2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003983319s
addons_test.go:967: (dbg) Run:  kubectl --context addons-948763 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 ssh "cat /opt/local-path-provisioner/pvc-75e5d985-dd82-4e2e-bc28-1cc76f6e0618_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-948763 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-948763 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (285.298723ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:28.274347  595210 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:28.275238  595210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:28.275280  595210 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:28.275302  595210 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:28.275602  595210 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:28.275937  595210 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:28.276401  595210 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:28.276451  595210 addons.go:606] checking whether the cluster is paused
	I1017 20:01:28.276583  595210 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:28.276624  595210 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:28.277121  595210 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:28.308814  595210 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:28.308868  595210 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:28.331457  595210 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:28.434343  595210 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:28.434457  595210 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:28.472109  595210 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:28.472129  595210 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:28.472134  595210 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:28.472138  595210 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:28.472141  595210 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:28.472145  595210 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:28.472148  595210 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:28.472152  595210 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:28.472155  595210 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:28.472162  595210 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:28.472165  595210 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:28.472169  595210 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:28.472172  595210 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:28.472175  595210 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:28.472178  595210 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:28.472185  595210 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:28.472189  595210 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:28.472194  595210 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:28.472197  595210 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:28.472200  595210 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:28.472204  595210 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:28.472207  595210 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:28.472210  595210 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:28.472213  595210 cri.go:89] found id: ""
	I1017 20:01:28.472260  595210 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:28.493710  595210 out.go:203] 
	W1017 20:01:28.496751  595210 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:28.496779  595210 out.go:285] * 
	* 
	W1017 20:01:28.504676  595210 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:28.508122  595210 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.40s)
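Note on this failure mode: the storage path itself worked in this run (the PVC was provisioned, the busybox pod completed, and file1 was readable from the provisioned /opt/local-path-provisioner directory); only the trailing addons disable step failed, with the same MK_ADDON_DISABLE_PAUSED error seen in the other addon tests. For orientation, the objects the test applies are equivalent in shape to the sketch below (illustrative only, not the repository's actual testdata/storage-provisioner-rancher manifests; local-path is the storage class the rancher provisioner registers):

    kubectl --context addons-948763 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 64Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-local-path
      labels:
        run: test-local-path
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:stable
        command: ["sh", "-c", "echo local-path-test > /test/file1"]
        volumeMounts:
        - name: data
          mountPath: /test
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
    EOF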

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-7vw8v" [fc2a0a25-79e2-40f6-af87-660887984563] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003720479s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (292.556776ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:39.846083  595370 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:39.846890  595370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:39.846903  595370 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:39.846907  595370 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:39.847224  595370 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:39.847525  595370 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:39.847939  595370 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:39.847957  595370 addons.go:606] checking whether the cluster is paused
	I1017 20:01:39.848061  595370 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:39.848080  595370 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:39.848530  595370 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:39.867212  595370 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:39.867269  595370 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:39.888787  595370 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:40.009905  595370 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:40.010041  595370 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:40.047708  595370 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:40.047791  595370 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:40.047803  595370 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:40.047808  595370 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:40.047812  595370 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:40.047816  595370 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:40.047820  595370 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:40.047823  595370 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:40.047827  595370 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:40.047833  595370 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:40.047837  595370 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:40.047840  595370 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:40.047844  595370 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:40.047852  595370 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:40.047856  595370 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:40.047876  595370 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:40.047883  595370 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:40.047889  595370 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:40.047892  595370 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:40.047895  595370 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:40.047900  595370 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:40.047903  595370 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:40.047907  595370 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:40.047910  595370 cri.go:89] found id: ""
	I1017 20:01:40.047966  595370 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:40.064078  595370 out.go:203] 
	W1017 20:01:40.067245  595370 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:40.067276  595370 out.go:285] * 
	* 
	W1017 20:01:40.074795  595370 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:40.077698  595370 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.30s)

                                                
                                    
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-rg2kq" [831c55af-e375-441e-bc69-0cebb42d4553] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004691398s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-948763 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-948763 addons disable yakd --alsologtostderr -v=1: exit status 11 (266.339954ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:01:34.573940  595310 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:01:34.574783  595310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:34.574840  595310 out.go:374] Setting ErrFile to fd 2...
	I1017 20:01:34.574862  595310 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:01:34.575233  595310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:01:34.575604  595310 mustload.go:65] Loading cluster: addons-948763
	I1017 20:01:34.576030  595310 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:34.576075  595310 addons.go:606] checking whether the cluster is paused
	I1017 20:01:34.576205  595310 config.go:182] Loaded profile config "addons-948763": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:01:34.576247  595310 host.go:66] Checking if "addons-948763" exists ...
	I1017 20:01:34.576744  595310 cli_runner.go:164] Run: docker container inspect addons-948763 --format={{.State.Status}}
	I1017 20:01:34.594428  595310 ssh_runner.go:195] Run: systemctl --version
	I1017 20:01:34.594483  595310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-948763
	I1017 20:01:34.612610  595310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/addons-948763/id_rsa Username:docker}
	I1017 20:01:34.717813  595310 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:01:34.717904  595310 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:01:34.751683  595310 cri.go:89] found id: "62f5238d05d6d3331bdcdfa32834e9ba24f88f15f7bea5a11b12d36607cd66b6"
	I1017 20:01:34.751707  595310 cri.go:89] found id: "d36d10bdfcbed8fa8b155064b07cac022adcfddf798b9d0d976929cab5badb7a"
	I1017 20:01:34.751712  595310 cri.go:89] found id: "c53a40a0c62b83bb91bae15c15d4519bd4e05c1368cf9377b9189caea7c6d931"
	I1017 20:01:34.751716  595310 cri.go:89] found id: "bcc361491949c41df60e8c0bb57306689ce5c84618d50fd8170c041f105be569"
	I1017 20:01:34.751720  595310 cri.go:89] found id: "ef912ed7d00d718bd73284a2ea63e38f4b2ff92ff2c7a9501d090beb30f3264e"
	I1017 20:01:34.751723  595310 cri.go:89] found id: "b9d12f5fbfa6ce0172e9c4d95efeb10c74a6004fb6bf233d21bad65576f0d053"
	I1017 20:01:34.751726  595310 cri.go:89] found id: "9bc6a4da19b29b1d0dd28c0da084e62c56a86ad652a2f98eb58037367f79845d"
	I1017 20:01:34.751729  595310 cri.go:89] found id: "840a4de138b52a747fb0424a513a8f135f222fd1c693c2901434457578394041"
	I1017 20:01:34.751753  595310 cri.go:89] found id: "63ee152cb5bc0f9d27b5cb39071d418e9dc29e173d893452dfe0edc3f80998df"
	I1017 20:01:34.751769  595310 cri.go:89] found id: "16b93abda111750b6bcfe1d4e99975e5a46a4e8f68ced7c10ac0f0ae0f9575fe"
	I1017 20:01:34.751774  595310 cri.go:89] found id: "44c18bb7ee5830bb8bf020ed915190eb9a86857958e75ee96e670816a40e2f57"
	I1017 20:01:34.751777  595310 cri.go:89] found id: "21d5cbb832a96f556421b6df7b5c046e2f5f5b04841a47a23a326d4a94015551"
	I1017 20:01:34.751781  595310 cri.go:89] found id: "64a714f55a7cccb0104accf13a9c90a9ca5de73c2a00c48c07cd4a3f1f15b9f7"
	I1017 20:01:34.751784  595310 cri.go:89] found id: "43e9813ed2b6717907e0ebbb069f00c3cce08ff91db081371a7c502236b08711"
	I1017 20:01:34.751788  595310 cri.go:89] found id: "6573cd1e55f004ed65b4347a6e4b6841ccd4912921ba3e885b0250724e2aecaf"
	I1017 20:01:34.751804  595310 cri.go:89] found id: "5a104d2bf0866f1196a24dfeecce2155004ddb38e71b74ebf8c5a3a63181df9a"
	I1017 20:01:34.751811  595310 cri.go:89] found id: "ae878718d77b69c3b65a6f60b8e50d8b96f2cd4a8ed3545f60bc68183c0ff220"
	I1017 20:01:34.751832  595310 cri.go:89] found id: "c2aef7690fa71a1130a0b11b64cc108c3395a46c26e356837886c939c15a1883"
	I1017 20:01:34.751837  595310 cri.go:89] found id: "1d1813b82c050eb036e8d36914463179efe7a1fba31d76df41ca8c1cb9cabd4b"
	I1017 20:01:34.751840  595310 cri.go:89] found id: "db1d8cdc9b83a37722883360cbe4dea536fb60d6f569b8a920482105824911c7"
	I1017 20:01:34.751845  595310 cri.go:89] found id: "e643c6e152656a195c0c3fce9485fcf427a043c8d0889a36e713233419bf3592"
	I1017 20:01:34.751852  595310 cri.go:89] found id: "fc4fe4ea2862e72a17790e6b436b6abd8e033fd040e882cea0076c03151f3bd2"
	I1017 20:01:34.751855  595310 cri.go:89] found id: "cf9507fdd5ef14c1593123dedb465f3e99973c6754cfd489921fcb22ec1320f7"
	I1017 20:01:34.751859  595310 cri.go:89] found id: ""
	I1017 20:01:34.751928  595310 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:01:34.767430  595310 out.go:203] 
	W1017 20:01:34.770257  595310 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:01:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:01:34.770276  595310 out.go:285] * 
	* 
	W1017 20:01:34.777549  595310 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:01:34.780591  595310 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-948763 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
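All of the addons disable failures above (Headlamp, CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd) exit the same way: the disable path SSHes into the node, successfully lists kube-system containers with crictl, then runs sudo runc list -f json to check whether the cluster is paused, and that command fails because /run/runc does not exist on this crio-based node. A minimal way to confirm the node state by hand, against the running addons-948763 profile (a sketch for diagnosis, not part of the test):

    out/minikube-linux-arm64 -p addons-948763 ssh -- sudo ls /run/runc
    # expected on this node: "No such file or directory", matching the runc error above
    out/minikube-linux-arm64 -p addons-948763 ssh -- sudo runc list -f json
    # reproduces the exit status 1 that trips MK_ADDON_DISABLE_PAUSED
    out/minikube-linux-arm64 -p addons-948763 ssh -- sudo crictl ps --quiet
    # succeeds, showing the containers themselves are running under crio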

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-787197 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-787197 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-c4vs6" [472cb1f6-2116-4a8a-a2b1-ffcdbff530d6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787197 -n functional-787197
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-17 20:17:34.787902739 +0000 UTC m=+1222.633864169
functional_test.go:1645: (dbg) Run:  kubectl --context functional-787197 describe po hello-node-connect-7d85dfc575-c4vs6 -n default
functional_test.go:1645: (dbg) kubectl --context functional-787197 describe po hello-node-connect-7d85dfc575-c4vs6 -n default:
Name:             hello-node-connect-7d85dfc575-c4vs6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-787197/192.168.49.2
Start Time:       Fri, 17 Oct 2025 20:07:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7lh4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-z7lh4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c4vs6 to functional-787197
Normal   Pulling    6m53s (x5 over 9m58s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m53s (x5 over 9m58s)   kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m41s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-787197 logs hello-node-connect-7d85dfc575-c4vs6 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-787197 logs hello-node-connect-7d85dfc575-c4vs6 -n default: exit status 1 (96.857046ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-c4vs6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-787197 logs hello-node-connect-7d85dfc575-c4vs6 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
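The kubelet events above point at the underlying cause rather than a networking problem: the image reference kicbase/echo-server is unqualified, and CRI-O on this node has short-name resolution in enforcing mode, so resolving the short name yields an "ambiguous list" of candidate registries and the pull is refused before the service is ever exercised. As a sketch only (docker.io is assumed here to be the intended registry; this command is not part of the test), pinning a fully qualified reference on the existing deployment avoids the ambiguity:

    kubectl --context functional-787197 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest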
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-787197 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-c4vs6
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-787197/192.168.49.2
Start Time:       Fri, 17 Oct 2025 20:07:34 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7lh4 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-z7lh4:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c4vs6 to functional-787197
Normal   Pulling    6m54s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m54s (x5 over 9m59s)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m42s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-787197 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-787197 logs -l app=hello-node-connect: exit status 1 (86.690928ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-c4vs6" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-787197 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-787197 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.212.51
IPs:                      10.109.212.51
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32492/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-787197
helpers_test.go:243: (dbg) docker inspect functional-787197:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47",
	        "Created": "2025-10-17T20:04:44.991570013Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 601827,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:04:45.121632212Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47/hostname",
	        "HostsPath": "/var/lib/docker/containers/8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47/hosts",
	        "LogPath": "/var/lib/docker/containers/8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47/8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47-json.log",
	        "Name": "/functional-787197",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-787197:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-787197",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8bafc17772df9c0f2b858bda751af7832f22b4f977967f1b728bcfb79213ff47",
	                "LowerDir": "/var/lib/docker/overlay2/56cb4e6da2a4c24faaee558a86f42253264fd2f90991878c60bb598ce8e65c07-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56cb4e6da2a4c24faaee558a86f42253264fd2f90991878c60bb598ce8e65c07/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56cb4e6da2a4c24faaee558a86f42253264fd2f90991878c60bb598ce8e65c07/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56cb4e6da2a4c24faaee558a86f42253264fd2f90991878c60bb598ce8e65c07/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-787197",
	                "Source": "/var/lib/docker/volumes/functional-787197/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-787197",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-787197",
	                "name.minikube.sigs.k8s.io": "functional-787197",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e0eddd467367721c8007e8ad9e309fa6aad1de2d9ce51292b7f1536e8f54d659",
	            "SandboxKey": "/var/run/docker/netns/e0eddd467367",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-787197": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:da:36:30:2e:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "667ce9ccc03e839d69af93b1aa8c09b290eedc1a790ed46cc26e842c832a180d",
	                    "EndpointID": "8fde368a7a8cc6bb0fd8888bca89a4cd95486867d5a0b34ff358ca69379b2fa0",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-787197",
	                        "8bafc17772df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
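
The inspect output above confirms that every exposed port of the functional-787197 container is published only on 127.0.0.1 with an ephemeral host port (22/tcp -> 33522, 8441/tcp -> 33525, and so on). The provisioning log further below reads the 22/tcp mapping with a Go template before opening its SSH connection; as a stand-alone illustration, here is a minimal sketch of the same lookup, assuming the functional-787197 container still exists on the local Docker daemon:

    # print the host port Docker mapped to the container's SSH port (22/tcp);
    # for the run captured above this would print 33522
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-787197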
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-787197 -n functional-787197
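helpers_test.go gates the post-mortem on the host state reported by minikube status; --format takes a Go template rendered over the status struct, so {{.Host}} prints only the host (container) state. A minimal shell sketch of the same check, with the profile name taken from this run and the "Running" comparison assumed rather than taken from the captured output:

    # succeed only if the profile's host container reports Running
    [ "$(out/minikube-linux-arm64 status --format='{{.Host}}' -p functional-787197)" = "Running" ]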
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 logs -n 25: (1.434519774s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                    │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                        │ minikube          │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ kubectl │ functional-787197 kubectl -- --context functional-787197 get pods                                                          │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:06 UTC │
	│ start   │ -p functional-787197 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                   │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:06 UTC │ 17 Oct 25 20:07 UTC │
	│ service │ invalid-svc -p functional-787197                                                                                           │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ cp      │ functional-787197 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                         │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ config  │ functional-787197 config unset cpus                                                                                        │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ config  │ functional-787197 config get cpus                                                                                          │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ config  │ functional-787197 config set cpus 2                                                                                        │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ config  │ functional-787197 config get cpus                                                                                          │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ config  │ functional-787197 config unset cpus                                                                                        │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ ssh     │ functional-787197 ssh -n functional-787197 sudo cat /home/docker/cp-test.txt                                               │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ config  │ functional-787197 config get cpus                                                                                          │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ ssh     │ functional-787197 ssh echo hello                                                                                           │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ cp      │ functional-787197 cp functional-787197:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4179430158/001/cp-test.txt │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ ssh     │ functional-787197 ssh cat /etc/hostname                                                                                    │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ ssh     │ functional-787197 ssh -n functional-787197 sudo cat /home/docker/cp-test.txt                                               │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ tunnel  │ functional-787197 tunnel --alsologtostderr                                                                                 │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ tunnel  │ functional-787197 tunnel --alsologtostderr                                                                                 │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ cp      │ functional-787197 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                  │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ ssh     │ functional-787197 ssh -n functional-787197 sudo cat /tmp/does/not/exist/cp-test.txt                                        │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ tunnel  │ functional-787197 tunnel --alsologtostderr                                                                                 │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │                     │
	│ addons  │ functional-787197 addons list                                                                                              │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	│ addons  │ functional-787197 addons list -o json                                                                                      │ functional-787197 │ jenkins │ v1.37.0 │ 17 Oct 25 20:07 UTC │ 17 Oct 25 20:07 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:06:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:06:38.568338  605972 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:06:38.568444  605972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:06:38.568448  605972 out.go:374] Setting ErrFile to fd 2...
	I1017 20:06:38.568451  605972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:06:38.568699  605972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:06:38.569049  605972 out.go:368] Setting JSON to false
	I1017 20:06:38.569910  605972 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10144,"bootTime":1760721454,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:06:38.569965  605972 start.go:141] virtualization:  
	I1017 20:06:38.573568  605972 out.go:179] * [functional-787197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:06:38.576772  605972 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:06:38.576971  605972 notify.go:220] Checking for updates...
	I1017 20:06:38.580495  605972 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:06:38.583632  605972 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:06:38.586876  605972 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:06:38.589844  605972 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:06:38.592716  605972 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:06:38.596121  605972 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:06:38.596243  605972 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:06:38.629249  605972 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:06:38.629344  605972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:06:38.688321  605972 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-17 20:06:38.679537256 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:06:38.688415  605972 docker.go:318] overlay module found
	I1017 20:06:38.691490  605972 out.go:179] * Using the docker driver based on existing profile
	I1017 20:06:38.694329  605972 start.go:305] selected driver: docker
	I1017 20:06:38.694337  605972 start.go:925] validating driver "docker" against &{Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:06:38.694437  605972 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:06:38.694550  605972 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:06:38.756472  605972 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-10-17 20:06:38.744746298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:06:38.756887  605972 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:06:38.756919  605972 cni.go:84] Creating CNI manager for ""
	I1017 20:06:38.756973  605972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:06:38.757013  605972 start.go:349] cluster config:
	{Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:06:38.761870  605972 out.go:179] * Starting "functional-787197" primary control-plane node in "functional-787197" cluster
	I1017 20:06:38.764730  605972 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:06:38.767539  605972 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:06:38.770329  605972 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:06:38.770375  605972 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:06:38.770383  605972 cache.go:58] Caching tarball of preloaded images
	I1017 20:06:38.770402  605972 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:06:38.770483  605972 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:06:38.770493  605972 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:06:38.770596  605972 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/config.json ...
	I1017 20:06:38.790541  605972 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:06:38.790552  605972 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:06:38.790563  605972 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:06:38.790584  605972 start.go:360] acquireMachinesLock for functional-787197: {Name:mk56658ad6db17a53142eea0d33fb8459bc00fef Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:06:38.790637  605972 start.go:364] duration metric: took 37.055µs to acquireMachinesLock for "functional-787197"
	I1017 20:06:38.790655  605972 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:06:38.790660  605972 fix.go:54] fixHost starting: 
	I1017 20:06:38.790904  605972 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
	I1017 20:06:38.807628  605972 fix.go:112] recreateIfNeeded on functional-787197: state=Running err=<nil>
	W1017 20:06:38.807655  605972 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:06:38.810943  605972 out.go:252] * Updating the running docker "functional-787197" container ...
	I1017 20:06:38.810970  605972 machine.go:93] provisionDockerMachine start ...
	I1017 20:06:38.811059  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:38.828788  605972 main.go:141] libmachine: Using SSH client type: native
	I1017 20:06:38.829098  605972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33522 <nil> <nil>}
	I1017 20:06:38.829106  605972 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:06:38.979019  605972 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-787197
	
	I1017 20:06:38.979049  605972 ubuntu.go:182] provisioning hostname "functional-787197"
	I1017 20:06:38.979145  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:38.997057  605972 main.go:141] libmachine: Using SSH client type: native
	I1017 20:06:38.997355  605972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33522 <nil> <nil>}
	I1017 20:06:38.997363  605972 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-787197 && echo "functional-787197" | sudo tee /etc/hostname
	I1017 20:06:39.156108  605972 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-787197
	
	I1017 20:06:39.156189  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:39.174453  605972 main.go:141] libmachine: Using SSH client type: native
	I1017 20:06:39.174756  605972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33522 <nil> <nil>}
	I1017 20:06:39.174769  605972 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-787197' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-787197/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-787197' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:06:39.323270  605972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:06:39.323286  605972 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:06:39.323302  605972 ubuntu.go:190] setting up certificates
	I1017 20:06:39.323310  605972 provision.go:84] configureAuth start
	I1017 20:06:39.323366  605972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787197
	I1017 20:06:39.341415  605972 provision.go:143] copyHostCerts
	I1017 20:06:39.341471  605972 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:06:39.341487  605972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:06:39.341568  605972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:06:39.341690  605972 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:06:39.341695  605972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:06:39.341733  605972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:06:39.341795  605972 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:06:39.341798  605972 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:06:39.341822  605972 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:06:39.341883  605972 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.functional-787197 san=[127.0.0.1 192.168.49.2 functional-787197 localhost minikube]
	I1017 20:06:39.854484  605972 provision.go:177] copyRemoteCerts
	I1017 20:06:39.854538  605972 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:06:39.854605  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:39.873002  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:06:39.979895  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:06:39.998357  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 20:06:40.057304  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:06:40.078636  605972 provision.go:87] duration metric: took 755.30256ms to configureAuth
	I1017 20:06:40.078655  605972 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:06:40.078867  605972 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:06:40.078967  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:40.098973  605972 main.go:141] libmachine: Using SSH client type: native
	I1017 20:06:40.099329  605972 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33522 <nil> <nil>}
	I1017 20:06:40.099342  605972 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:06:45.531964  605972 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:06:45.531976  605972 machine.go:96] duration metric: took 6.721000032s to provisionDockerMachine
	I1017 20:06:45.531985  605972 start.go:293] postStartSetup for "functional-787197" (driver="docker")
	I1017 20:06:45.531995  605972 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:06:45.532075  605972 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:06:45.532126  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:45.550842  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:06:45.655256  605972 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:06:45.658817  605972 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:06:45.658837  605972 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:06:45.658846  605972 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:06:45.658903  605972 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:06:45.658977  605972 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:06:45.659053  605972 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/test/nested/copy/586172/hosts -> hosts in /etc/test/nested/copy/586172
	I1017 20:06:45.659136  605972 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/586172
	I1017 20:06:45.666778  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:06:45.684932  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/test/nested/copy/586172/hosts --> /etc/test/nested/copy/586172/hosts (40 bytes)
	I1017 20:06:45.702716  605972 start.go:296] duration metric: took 170.715657ms for postStartSetup
	I1017 20:06:45.702787  605972 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:06:45.702824  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:45.718858  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:06:45.824296  605972 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:06:45.828743  605972 fix.go:56] duration metric: took 7.038075658s for fixHost
	I1017 20:06:45.828757  605972 start.go:83] releasing machines lock for "functional-787197", held for 7.038113591s
	I1017 20:06:45.828834  605972 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-787197
	I1017 20:06:45.845777  605972 ssh_runner.go:195] Run: cat /version.json
	I1017 20:06:45.845811  605972 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:06:45.845827  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:45.845860  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:06:45.866572  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:06:45.872698  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:06:46.054838  605972 ssh_runner.go:195] Run: systemctl --version
	I1017 20:06:46.061551  605972 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:06:46.102260  605972 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:06:46.107377  605972 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:06:46.107439  605972 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:06:46.115369  605972 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:06:46.115383  605972 start.go:495] detecting cgroup driver to use...
	I1017 20:06:46.115413  605972 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:06:46.115459  605972 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:06:46.131367  605972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:06:46.144393  605972 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:06:46.144448  605972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:06:46.160013  605972 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:06:46.173994  605972 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:06:46.316436  605972 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:06:46.457365  605972 docker.go:234] disabling docker service ...
	I1017 20:06:46.457424  605972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:06:46.473657  605972 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:06:46.488390  605972 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:06:46.621025  605972 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:06:46.753337  605972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:06:46.766354  605972 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:06:46.781329  605972 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:06:46.781385  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.790628  605972 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:06:46.790687  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.799852  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.809018  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.818079  605972 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:06:46.826461  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.835754  605972 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.845179  605972 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:06:46.854887  605972 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:06:46.862386  605972 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:06:46.869761  605972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:06:47.009063  605972 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:06:51.556512  605972 ssh_runner.go:235] Completed: sudo systemctl restart crio: (4.54742649s)
	I1017 20:06:51.556528  605972 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:06:51.556581  605972 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:06:51.561587  605972 start.go:563] Will wait 60s for crictl version
	I1017 20:06:51.561643  605972 ssh_runner.go:195] Run: which crictl
	I1017 20:06:51.565283  605972 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:06:51.593469  605972 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:06:51.593558  605972 ssh_runner.go:195] Run: crio --version
	I1017 20:06:51.623465  605972 ssh_runner.go:195] Run: crio --version
	I1017 20:06:51.655557  605972 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:06:51.658478  605972 cli_runner.go:164] Run: docker network inspect functional-787197 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:06:51.674450  605972 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:06:51.681605  605972 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1017 20:06:51.684384  605972 kubeadm.go:883] updating cluster {Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:06:51.684517  605972 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:06:51.684599  605972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:06:51.721571  605972 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:06:51.721590  605972 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:06:51.721647  605972 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:06:51.747200  605972 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:06:51.747212  605972 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:06:51.747219  605972 kubeadm.go:934] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1017 20:06:51.747314  605972 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-787197 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:06:51.747389  605972 ssh_runner.go:195] Run: crio config
	I1017 20:06:51.803095  605972 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1017 20:06:51.803144  605972 cni.go:84] Creating CNI manager for ""
	I1017 20:06:51.803153  605972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:06:51.803166  605972 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:06:51.803201  605972 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-787197 NodeName:functional-787197 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:06:51.803322  605972 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-787197"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 20:06:51.803390  605972 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:06:51.811073  605972 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:06:51.811157  605972 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 20:06:51.818731  605972 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 20:06:51.831733  605972 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:06:51.844768  605972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1017 20:06:51.857679  605972 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1017 20:06:51.861520  605972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:06:51.988257  605972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:06:52.002145  605972 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197 for IP: 192.168.49.2
	I1017 20:06:52.002156  605972 certs.go:195] generating shared ca certs ...
	I1017 20:06:52.002172  605972 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:06:52.002381  605972 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:06:52.002438  605972 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:06:52.002444  605972 certs.go:257] generating profile certs ...
	I1017 20:06:52.002525  605972 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.key
	I1017 20:06:52.002570  605972 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/apiserver.key.dd6d00b7
	I1017 20:06:52.002606  605972 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/proxy-client.key
	I1017 20:06:52.002717  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:06:52.002745  605972 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:06:52.002757  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:06:52.002780  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:06:52.002802  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:06:52.002823  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:06:52.002863  605972 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:06:52.003542  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:06:52.029116  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:06:52.049596  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:06:52.067868  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:06:52.086133  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 20:06:52.104206  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:06:52.121375  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:06:52.138827  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 20:06:52.156289  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:06:52.173372  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:06:52.190569  605972 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:06:52.207799  605972 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:06:52.219782  605972 ssh_runner.go:195] Run: openssl version
	I1017 20:06:52.226216  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:06:52.234532  605972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:52.238148  605972 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:52.238205  605972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:06:52.279029  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:06:52.287027  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:06:52.295340  605972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:06:52.299027  605972 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:06:52.299081  605972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:06:52.340305  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:06:52.348488  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:06:52.361387  605972 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:06:52.365123  605972 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:06:52.365188  605972 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:06:52.405873  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
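The symlink targets above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL c_rehash naming convention: the file name is the certificate's subject-name hash, as printed by "openssl x509 -hash -noout", plus a collision counter, so TLS libraries that scan /etc/ssl/certs can locate an issuer by hash instead of reading every file. A minimal sketch of reproducing one of those links by hand, using the minikubeCA path from the log:

    # Compute the subject hash OpenSSL uses for CA directory lookups,
    # then create the matching "<hash>.0" symlink.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"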
	I1017 20:06:52.413738  605972 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:06:52.417427  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:06:52.458140  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:06:52.499228  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:06:52.541924  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:06:52.583515  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:06:52.624482  605972 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
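Each -checkend 86400 call above asks OpenSSL whether the certificate will expire within the next 86400 seconds (24 hours): the command prints a one-line verdict and, more usefully for scripting, exits 0 if the cert stays valid past that window and 1 if it does not. The same check for a single cert, as a sketch:

    # Exit status: 0 = valid for at least another 24h, 1 = expiring or expired.
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver cert valid for at least another day"
    else
      echo "apiserver cert expires within 24h"
    fi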
	I1017 20:06:52.665224  605972 kubeadm.go:400] StartCluster: {Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:06:52.665304  605972 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:06:52.665407  605972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:06:52.695753  605972 cri.go:89] found id: "12f933853061ca2dadad7fc9f7def334451ad1550aa0030801c98f534bcc2615"
	I1017 20:06:52.695764  605972 cri.go:89] found id: "6f20794ffbb2ad9f7cd4f31f9c06d2b0724806510e4bac6230bca73c2b621719"
	I1017 20:06:52.695767  605972 cri.go:89] found id: "0fd593fa3a0b3e9a3e386c44218575a413ae8d5823fbcf8cb87735253b9a9448"
	I1017 20:06:52.695770  605972 cri.go:89] found id: "e7d85d3b3b7f734ea07b92ddfd5ee45eae92d48b4e4d38676d91c760d0dce2e5"
	I1017 20:06:52.695773  605972 cri.go:89] found id: "c770304c166d9f7a84d2de0378b3ddccb122d84903855636f5887bd0fb334f54"
	I1017 20:06:52.695776  605972 cri.go:89] found id: "692df764ebaf13fff3a4a71db5e869cb81b663d5a0793c03a2e540eca2821f96"
	I1017 20:06:52.695778  605972 cri.go:89] found id: "8b8d5b782b743f58c4fd40dbc1fa529fde00da78875d488c6c2f0499a308e692"
	I1017 20:06:52.695780  605972 cri.go:89] found id: "9cece6f4f5345aa2aa26f5b67c16f5dab463958719024b2b58d29321f8684c26"
	I1017 20:06:52.695783  605972 cri.go:89] found id: "9748d2a593b567df50863438cc8e63bcdf7e97b1d21c07cfd491b041a0017106"
	I1017 20:06:52.695790  605972 cri.go:89] found id: "8dd1e1be0cb67f630ceaff7a045de2c7366ab31f1370d4ffd8929384df7d57a5"
	I1017 20:06:52.695792  605972 cri.go:89] found id: "af7fc2bc689fe650830a5ea2491a59ad9934330c6e3e02f8a50f586406f43366"
	I1017 20:06:52.695804  605972 cri.go:89] found id: "d3771324b0a04543c05fd1086b0893f176971f3ead2c60464d60ad6c4a4739ed"
	I1017 20:06:52.695807  605972 cri.go:89] found id: "94df1719eff4487bf431f9bb8cddc82980005168e7139643b021a221614fa524"
	I1017 20:06:52.695810  605972 cri.go:89] found id: "3f6f0ea12a6bfc6976e922e4233b8e72b9775a89722cac42a5ded47d6c37b133"
	I1017 20:06:52.695812  605972 cri.go:89] found id: "ac655f9e9ba7a9cdbd5bf3959149548b215c579232b3d00b674ba9d2bb3cf1bb"
	I1017 20:06:52.695817  605972 cri.go:89] found id: "7449b3bd66cd7c8e9a1a8edf126cadb3d6bca519303fe3431c90c32514fe4eaf"
	I1017 20:06:52.695819  605972 cri.go:89] found id: ""
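The IDs listed above come straight from crictl's filters: --label io.kubernetes.pod.namespace=kube-system restricts the listing to containers belonging to kube-system pods, -a includes stopped containers, and --quiet prints only the container IDs, one per line. Run directly on the node, the same listing is:

    # Print only the IDs of all kube-system containers, including stopped ones.
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system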
	I1017 20:06:52.695870  605972 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:06:52.706674  605972 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:06:52Z" level=error msg="open /run/runc: no such file or directory"
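The runc failure above is not fatal; minikube logs the warning and carries on. /run/runc is runc's default state root, and the error just means that directory does not exist on this node, typically because the runtime was invoked with a different state root (CRI-O can point runc elsewhere via its runtime_root setting), so a bare runc list has nothing to read. If you do need to inspect runc state directly, the root can be passed explicitly; the path below is illustrative:

    # List containers known to runc under an explicit state root (path is an example).
    sudo runc --root /run/runc list -f json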
	I1017 20:06:52.706747  605972 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:06:52.714309  605972 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:06:52.714318  605972 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:06:52.714370  605972 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:06:52.721740  605972 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:06:52.722251  605972 kubeconfig.go:125] found "functional-787197" server: "https://192.168.49.2:8441"
	I1017 20:06:52.723628  605972 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:06:52.731205  605972 kubeadm.go:644] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-10-17 20:04:54.929777721 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-10-17 20:06:51.851414355 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
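The drift check here is just diff's exit status: diff -u exits 0 when the two files are identical and 1 when they differ, so a non-zero exit against the freshly generated kubeadm.yaml.new is what triggers the reconfigure path that follows. A minimal sketch of the same pattern with the paths used above:

    # diff exits 0 if identical, 1 if the files differ, >1 on error.
    if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
      echo "kubeadm config drift detected; cluster will be reconfigured"
    fi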
	I1017 20:06:52.731214  605972 kubeadm.go:1160] stopping kube-system containers ...
	I1017 20:06:52.731225  605972 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1017 20:06:52.731283  605972 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:06:52.763746  605972 cri.go:89] found id: "12f933853061ca2dadad7fc9f7def334451ad1550aa0030801c98f534bcc2615"
	I1017 20:06:52.763757  605972 cri.go:89] found id: "6f20794ffbb2ad9f7cd4f31f9c06d2b0724806510e4bac6230bca73c2b621719"
	I1017 20:06:52.763760  605972 cri.go:89] found id: "0fd593fa3a0b3e9a3e386c44218575a413ae8d5823fbcf8cb87735253b9a9448"
	I1017 20:06:52.763762  605972 cri.go:89] found id: "e7d85d3b3b7f734ea07b92ddfd5ee45eae92d48b4e4d38676d91c760d0dce2e5"
	I1017 20:06:52.763765  605972 cri.go:89] found id: "c770304c166d9f7a84d2de0378b3ddccb122d84903855636f5887bd0fb334f54"
	I1017 20:06:52.763767  605972 cri.go:89] found id: "692df764ebaf13fff3a4a71db5e869cb81b663d5a0793c03a2e540eca2821f96"
	I1017 20:06:52.763775  605972 cri.go:89] found id: "8b8d5b782b743f58c4fd40dbc1fa529fde00da78875d488c6c2f0499a308e692"
	I1017 20:06:52.763779  605972 cri.go:89] found id: "9cece6f4f5345aa2aa26f5b67c16f5dab463958719024b2b58d29321f8684c26"
	I1017 20:06:52.763781  605972 cri.go:89] found id: "9748d2a593b567df50863438cc8e63bcdf7e97b1d21c07cfd491b041a0017106"
	I1017 20:06:52.763787  605972 cri.go:89] found id: "8dd1e1be0cb67f630ceaff7a045de2c7366ab31f1370d4ffd8929384df7d57a5"
	I1017 20:06:52.763801  605972 cri.go:89] found id: "af7fc2bc689fe650830a5ea2491a59ad9934330c6e3e02f8a50f586406f43366"
	I1017 20:06:52.763803  605972 cri.go:89] found id: "d3771324b0a04543c05fd1086b0893f176971f3ead2c60464d60ad6c4a4739ed"
	I1017 20:06:52.763805  605972 cri.go:89] found id: "94df1719eff4487bf431f9bb8cddc82980005168e7139643b021a221614fa524"
	I1017 20:06:52.763807  605972 cri.go:89] found id: "3f6f0ea12a6bfc6976e922e4233b8e72b9775a89722cac42a5ded47d6c37b133"
	I1017 20:06:52.763809  605972 cri.go:89] found id: "ac655f9e9ba7a9cdbd5bf3959149548b215c579232b3d00b674ba9d2bb3cf1bb"
	I1017 20:06:52.763813  605972 cri.go:89] found id: "7449b3bd66cd7c8e9a1a8edf126cadb3d6bca519303fe3431c90c32514fe4eaf"
	I1017 20:06:52.763815  605972 cri.go:89] found id: ""
	I1017 20:06:52.763819  605972 cri.go:252] Stopping containers: [12f933853061ca2dadad7fc9f7def334451ad1550aa0030801c98f534bcc2615 6f20794ffbb2ad9f7cd4f31f9c06d2b0724806510e4bac6230bca73c2b621719 0fd593fa3a0b3e9a3e386c44218575a413ae8d5823fbcf8cb87735253b9a9448 e7d85d3b3b7f734ea07b92ddfd5ee45eae92d48b4e4d38676d91c760d0dce2e5 c770304c166d9f7a84d2de0378b3ddccb122d84903855636f5887bd0fb334f54 692df764ebaf13fff3a4a71db5e869cb81b663d5a0793c03a2e540eca2821f96 8b8d5b782b743f58c4fd40dbc1fa529fde00da78875d488c6c2f0499a308e692 9cece6f4f5345aa2aa26f5b67c16f5dab463958719024b2b58d29321f8684c26 9748d2a593b567df50863438cc8e63bcdf7e97b1d21c07cfd491b041a0017106 8dd1e1be0cb67f630ceaff7a045de2c7366ab31f1370d4ffd8929384df7d57a5 af7fc2bc689fe650830a5ea2491a59ad9934330c6e3e02f8a50f586406f43366 d3771324b0a04543c05fd1086b0893f176971f3ead2c60464d60ad6c4a4739ed 94df1719eff4487bf431f9bb8cddc82980005168e7139643b021a221614fa524 3f6f0ea12a6bfc6976e922e4233b8e72b9775a89722cac42a5ded47d6c37b133 ac655f9e9ba7a9cdbd5bf3959149548b215c57923
2b3d00b674ba9d2bb3cf1bb 7449b3bd66cd7c8e9a1a8edf126cadb3d6bca519303fe3431c90c32514fe4eaf]
	I1017 20:06:52.763878  605972 ssh_runner.go:195] Run: which crictl
	I1017 20:06:52.767653  605972 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 12f933853061ca2dadad7fc9f7def334451ad1550aa0030801c98f534bcc2615 6f20794ffbb2ad9f7cd4f31f9c06d2b0724806510e4bac6230bca73c2b621719 0fd593fa3a0b3e9a3e386c44218575a413ae8d5823fbcf8cb87735253b9a9448 e7d85d3b3b7f734ea07b92ddfd5ee45eae92d48b4e4d38676d91c760d0dce2e5 c770304c166d9f7a84d2de0378b3ddccb122d84903855636f5887bd0fb334f54 692df764ebaf13fff3a4a71db5e869cb81b663d5a0793c03a2e540eca2821f96 8b8d5b782b743f58c4fd40dbc1fa529fde00da78875d488c6c2f0499a308e692 9cece6f4f5345aa2aa26f5b67c16f5dab463958719024b2b58d29321f8684c26 9748d2a593b567df50863438cc8e63bcdf7e97b1d21c07cfd491b041a0017106 8dd1e1be0cb67f630ceaff7a045de2c7366ab31f1370d4ffd8929384df7d57a5 af7fc2bc689fe650830a5ea2491a59ad9934330c6e3e02f8a50f586406f43366 d3771324b0a04543c05fd1086b0893f176971f3ead2c60464d60ad6c4a4739ed 94df1719eff4487bf431f9bb8cddc82980005168e7139643b021a221614fa524 3f6f0ea12a6bfc6976e922e4233b8e72b9775a89722cac42a5ded47d6c37b133 ac655f
9e9ba7a9cdbd5bf3959149548b215c579232b3d00b674ba9d2bb3cf1bb 7449b3bd66cd7c8e9a1a8edf126cadb3d6bca519303fe3431c90c32514fe4eaf
	I1017 20:06:52.866964  605972 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1017 20:06:52.974310  605972 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 20:06:52.982318  605972 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5631 Oct 17 20:04 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Oct 17 20:05 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Oct 17 20:05 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Oct 17 20:05 /etc/kubernetes/scheduler.conf
	
	I1017 20:06:52.982374  605972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1017 20:06:52.990086  605972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1017 20:06:52.997540  605972 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:06:52.997610  605972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 20:06:53.005811  605972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1017 20:06:53.014385  605972 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:06:53.014448  605972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 20:06:53.022163  605972 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1017 20:06:53.029957  605972 kubeadm.go:163] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:06:53.030010  605972 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 20:06:53.037847  605972 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 20:06:53.045705  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:06:53.101043  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:06:55.667836  605972 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.566769571s)
	I1017 20:06:55.667901  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:06:55.905420  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:06:55.973931  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:06:56.071273  605972 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:06:56.071339  605972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:06:56.572415  605972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:06:57.072329  605972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:06:57.088547  605972 api_server.go:72] duration metric: took 1.017285043s to wait for apiserver process to appear ...
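The wait loop above leans on pgrep's flags: -f matches against the full command line rather than just the process name, -x requires the pattern to match that command line exactly, and -n returns only the newest matching PID, so the loop simply re-runs the command until a kube-apiserver process matching the pattern exists. Standalone (pattern quoted to keep the shell from expanding it):

    # Newest process whose full command line matches the pattern.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'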
	I1017 20:06:57.088561  605972 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:06:57.088581  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:00.820685  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 20:07:00.820704  605972 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 20:07:00.820716  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:00.993317  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:07:00.993336  605972 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:07:01.089515  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:01.110313  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:07:01.110329  605972 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:07:01.588690  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:01.608851  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 20:07:01.608869  605972 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 20:07:02.089345  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:02.104247  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1017 20:07:02.121471  605972 api_server.go:141] control plane version: v1.34.1
	I1017 20:07:02.121496  605972 api_server.go:131] duration metric: took 5.032929922s to wait for apiserver health ...
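The sequence above (403 while the RBAC bootstrap roles are still being created, 500 while individual post-start hooks are failing, then 200) is the normal startup progression for a restarted apiserver: /healthz aggregates the [+]/[-] checks shown and keeps returning 500 until every one of them passes. Polling it by hand looks roughly like this; -k is needed because the cluster CA is not in the system trust store, and the request is anonymous just like minikube's probe:

    # Poll the apiserver health endpoint until it returns HTTP 200.
    until [ "$(curl -k -s -o /dev/null -w '%{http_code}' https://192.168.49.2:8441/healthz)" = "200" ]; do
      sleep 1
    done
    echo "apiserver is healthy"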
	I1017 20:07:02.121504  605972 cni.go:84] Creating CNI manager for ""
	I1017 20:07:02.121509  605972 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:07:02.125397  605972 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 20:07:02.128462  605972 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 20:07:02.132586  605972 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 20:07:02.132597  605972 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 20:07:02.146829  605972 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 20:07:02.588011  605972 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:07:02.591395  605972 system_pods.go:59] 8 kube-system pods found
	I1017 20:07:02.591417  605972 system_pods.go:61] "coredns-66bc5c9577-mj8cn" [923ebc0c-9b88-4f6c-8797-eb6004afe45c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:07:02.591424  605972 system_pods.go:61] "etcd-functional-787197" [c6507066-c935-421f-a07e-3fe7748413ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:07:02.591429  605972 system_pods.go:61] "kindnet-k6gn8" [a81ef6c2-0b18-4666-9cc3-12b9cfc7c04a] Running
	I1017 20:07:02.591435  605972 system_pods.go:61] "kube-apiserver-functional-787197" [9c964fd9-8727-4304-8e80-5d3eb14b4246] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:07:02.591441  605972 system_pods.go:61] "kube-controller-manager-functional-787197" [9a045939-82d2-4bd0-a311-13b385d9a986] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:07:02.591446  605972 system_pods.go:61] "kube-proxy-qj9ps" [08dfd00a-b2ee-4e6b-bae7-a2203adbf66f] Running
	I1017 20:07:02.591453  605972 system_pods.go:61] "kube-scheduler-functional-787197" [9406ec70-63ba-434a-b5f7-7b44126b2958] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:07:02.591456  605972 system_pods.go:61] "storage-provisioner" [7f4da848-4070-4e4c-ab32-a57a40c2a7be] Running
	I1017 20:07:02.591462  605972 system_pods.go:74] duration metric: took 3.439539ms to wait for pod list to return data ...
	I1017 20:07:02.591469  605972 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:07:02.594151  605972 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:07:02.594169  605972 node_conditions.go:123] node cpu capacity is 2
	I1017 20:07:02.594179  605972 node_conditions.go:105] duration metric: took 2.706762ms to run NodePressure ...
	I1017 20:07:02.594242  605972 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1017 20:07:02.852162  605972 kubeadm.go:728] waiting for restarted kubelet to initialise ...
	I1017 20:07:02.858421  605972 kubeadm.go:743] kubelet initialised
	I1017 20:07:02.858431  605972 kubeadm.go:744] duration metric: took 6.257498ms waiting for restarted kubelet to initialise ...
	I1017 20:07:02.858462  605972 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 20:07:02.867986  605972 ops.go:34] apiserver oom_adj: -16
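The -16 read back here is the legacy OOM adjustment for the apiserver process: /proc/<pid>/oom_adj ranges from -17 (never OOM-killed) to +15, so -16 makes the kernel very unlikely to pick kube-apiserver when memory runs out; modern kernels expose the equivalent knob as oom_score_adj in the range -1000..+1000. A quick variant of the same check, using the newest matching PID:

    # Inspect the OOM adjustment of the newest kube-apiserver process.
    cat /proc/"$(pgrep -n kube-apiserver)"/oom_adj
    cat /proc/"$(pgrep -n kube-apiserver)"/oom_score_adj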
	I1017 20:07:02.868001  605972 kubeadm.go:601] duration metric: took 10.153674666s to restartPrimaryControlPlane
	I1017 20:07:02.868009  605972 kubeadm.go:402] duration metric: took 10.20279536s to StartCluster
	I1017 20:07:02.868023  605972 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:02.868094  605972 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:07:02.868753  605972 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:07:02.869212  605972 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:07:02.869027  605972 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:07:02.869314  605972 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:07:02.869372  605972 addons.go:69] Setting storage-provisioner=true in profile "functional-787197"
	I1017 20:07:02.869385  605972 addons.go:238] Setting addon storage-provisioner=true in "functional-787197"
	W1017 20:07:02.869391  605972 addons.go:247] addon storage-provisioner should already be in state true
	I1017 20:07:02.869409  605972 host.go:66] Checking if "functional-787197" exists ...
	I1017 20:07:02.869862  605972 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
	I1017 20:07:02.870290  605972 addons.go:69] Setting default-storageclass=true in profile "functional-787197"
	I1017 20:07:02.870304  605972 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-787197"
	I1017 20:07:02.870582  605972 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
	I1017 20:07:02.878291  605972 out.go:179] * Verifying Kubernetes components...
	I1017 20:07:02.883579  605972 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:07:02.908688  605972 addons.go:238] Setting addon default-storageclass=true in "functional-787197"
	W1017 20:07:02.908699  605972 addons.go:247] addon default-storageclass should already be in state true
	I1017 20:07:02.908721  605972 host.go:66] Checking if "functional-787197" exists ...
	I1017 20:07:02.909122  605972 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
	I1017 20:07:02.917545  605972 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 20:07:02.921629  605972 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:02.921643  605972 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 20:07:02.921714  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:07:02.943026  605972 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:02.943040  605972 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 20:07:02.948376  605972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:07:02.965108  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:07:02.984530  605972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
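The two docker container inspect calls above use a Go template to pull out the host port that Docker published for the container's 22/tcp, which is what the SSH client then dials on 127.0.0.1 (port 33522 here). The same lookup from a shell, with the container name used in this run:

    # Resolve the host port mapped to the container's SSH port (22/tcp).
    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      functional-787197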
	I1017 20:07:03.129683  605972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 20:07:03.164309  605972 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:07:03.187051  605972 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 20:07:03.971074  605972 node_ready.go:35] waiting up to 6m0s for node "functional-787197" to be "Ready" ...
	I1017 20:07:03.974688  605972 node_ready.go:49] node "functional-787197" is "Ready"
	I1017 20:07:03.974716  605972 node_ready.go:38] duration metric: took 3.587693ms for node "functional-787197" to be "Ready" ...
	I1017 20:07:03.974728  605972 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:07:03.974796  605972 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:07:03.981537  605972 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 20:07:03.984496  605972 addons.go:514] duration metric: took 1.115169401s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 20:07:03.987983  605972 api_server.go:72] duration metric: took 1.11872059s to wait for apiserver process to appear ...
	I1017 20:07:03.988010  605972 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:07:03.988027  605972 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1017 20:07:03.998255  605972 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1017 20:07:03.999174  605972 api_server.go:141] control plane version: v1.34.1
	I1017 20:07:03.999186  605972 api_server.go:131] duration metric: took 11.171019ms to wait for apiserver health ...
	I1017 20:07:03.999194  605972 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:07:04.002292  605972 system_pods.go:59] 8 kube-system pods found
	I1017 20:07:04.002314  605972 system_pods.go:61] "coredns-66bc5c9577-mj8cn" [923ebc0c-9b88-4f6c-8797-eb6004afe45c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:07:04.002322  605972 system_pods.go:61] "etcd-functional-787197" [c6507066-c935-421f-a07e-3fe7748413ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:07:04.002327  605972 system_pods.go:61] "kindnet-k6gn8" [a81ef6c2-0b18-4666-9cc3-12b9cfc7c04a] Running
	I1017 20:07:04.002333  605972 system_pods.go:61] "kube-apiserver-functional-787197" [9c964fd9-8727-4304-8e80-5d3eb14b4246] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:07:04.002339  605972 system_pods.go:61] "kube-controller-manager-functional-787197" [9a045939-82d2-4bd0-a311-13b385d9a986] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:07:04.002343  605972 system_pods.go:61] "kube-proxy-qj9ps" [08dfd00a-b2ee-4e6b-bae7-a2203adbf66f] Running
	I1017 20:07:04.002348  605972 system_pods.go:61] "kube-scheduler-functional-787197" [9406ec70-63ba-434a-b5f7-7b44126b2958] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:07:04.002356  605972 system_pods.go:61] "storage-provisioner" [7f4da848-4070-4e4c-ab32-a57a40c2a7be] Running
	I1017 20:07:04.002362  605972 system_pods.go:74] duration metric: took 3.163072ms to wait for pod list to return data ...
	I1017 20:07:04.002369  605972 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:07:04.005602  605972 default_sa.go:45] found service account: "default"
	I1017 20:07:04.005618  605972 default_sa.go:55] duration metric: took 3.244501ms for default service account to be created ...
	I1017 20:07:04.005627  605972 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:07:04.009144  605972 system_pods.go:86] 8 kube-system pods found
	I1017 20:07:04.009165  605972 system_pods.go:89] "coredns-66bc5c9577-mj8cn" [923ebc0c-9b88-4f6c-8797-eb6004afe45c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:07:04.009174  605972 system_pods.go:89] "etcd-functional-787197" [c6507066-c935-421f-a07e-3fe7748413ca] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:07:04.009178  605972 system_pods.go:89] "kindnet-k6gn8" [a81ef6c2-0b18-4666-9cc3-12b9cfc7c04a] Running
	I1017 20:07:04.009185  605972 system_pods.go:89] "kube-apiserver-functional-787197" [9c964fd9-8727-4304-8e80-5d3eb14b4246] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:07:04.009191  605972 system_pods.go:89] "kube-controller-manager-functional-787197" [9a045939-82d2-4bd0-a311-13b385d9a986] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:07:04.009195  605972 system_pods.go:89] "kube-proxy-qj9ps" [08dfd00a-b2ee-4e6b-bae7-a2203adbf66f] Running
	I1017 20:07:04.009210  605972 system_pods.go:89] "kube-scheduler-functional-787197" [9406ec70-63ba-434a-b5f7-7b44126b2958] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 20:07:04.009213  605972 system_pods.go:89] "storage-provisioner" [7f4da848-4070-4e4c-ab32-a57a40c2a7be] Running
	I1017 20:07:04.009218  605972 system_pods.go:126] duration metric: took 3.586807ms to wait for k8s-apps to be running ...
	I1017 20:07:04.009226  605972 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:07:04.009292  605972 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:07:04.023067  605972 system_svc.go:56] duration metric: took 13.830404ms WaitForService to wait for kubelet
	I1017 20:07:04.023085  605972 kubeadm.go:586] duration metric: took 1.153826327s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:07:04.023139  605972 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:07:04.026130  605972 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:07:04.026146  605972 node_conditions.go:123] node cpu capacity is 2
	I1017 20:07:04.026155  605972 node_conditions.go:105] duration metric: took 3.011316ms to run NodePressure ...
	I1017 20:07:04.026167  605972 start.go:241] waiting for startup goroutines ...
	I1017 20:07:04.026174  605972 start.go:246] waiting for cluster config update ...
	I1017 20:07:04.026183  605972 start.go:255] writing updated cluster config ...
	I1017 20:07:04.026467  605972 ssh_runner.go:195] Run: rm -f paused
	I1017 20:07:04.030262  605972 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:07:04.033832  605972 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mj8cn" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:07:06.039756  605972 pod_ready.go:104] pod "coredns-66bc5c9577-mj8cn" is not "Ready", error: <nil>
	W1017 20:07:08.040429  605972 pod_ready.go:104] pod "coredns-66bc5c9577-mj8cn" is not "Ready", error: <nil>
	I1017 20:07:09.540324  605972 pod_ready.go:94] pod "coredns-66bc5c9577-mj8cn" is "Ready"
	I1017 20:07:09.540338  605972 pod_ready.go:86] duration metric: took 5.50649244s for pod "coredns-66bc5c9577-mj8cn" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:09.543390  605972 pod_ready.go:83] waiting for pod "etcd-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:07:11.549812  605972 pod_ready.go:104] pod "etcd-functional-787197" is not "Ready", error: <nil>
	I1017 20:07:12.548955  605972 pod_ready.go:94] pod "etcd-functional-787197" is "Ready"
	I1017 20:07:12.548969  605972 pod_ready.go:86] duration metric: took 3.005566004s for pod "etcd-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:12.551383  605972 pod_ready.go:83] waiting for pod "kube-apiserver-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:13.057111  605972 pod_ready.go:94] pod "kube-apiserver-functional-787197" is "Ready"
	I1017 20:07:13.057125  605972 pod_ready.go:86] duration metric: took 505.729283ms for pod "kube-apiserver-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:13.059858  605972 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 20:07:15.065907  605972 pod_ready.go:104] pod "kube-controller-manager-functional-787197" is not "Ready", error: <nil>
	I1017 20:07:16.065572  605972 pod_ready.go:94] pod "kube-controller-manager-functional-787197" is "Ready"
	I1017 20:07:16.065597  605972 pod_ready.go:86] duration metric: took 3.005716524s for pod "kube-controller-manager-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:16.068317  605972 pod_ready.go:83] waiting for pod "kube-proxy-qj9ps" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:16.073841  605972 pod_ready.go:94] pod "kube-proxy-qj9ps" is "Ready"
	I1017 20:07:16.073856  605972 pod_ready.go:86] duration metric: took 5.526519ms for pod "kube-proxy-qj9ps" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:16.076788  605972 pod_ready.go:83] waiting for pod "kube-scheduler-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:16.148639  605972 pod_ready.go:94] pod "kube-scheduler-functional-787197" is "Ready"
	I1017 20:07:16.148654  605972 pod_ready.go:86] duration metric: took 71.852129ms for pod "kube-scheduler-functional-787197" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 20:07:16.148664  605972 pod_ready.go:40] duration metric: took 12.118379201s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 20:07:16.205798  605972 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 20:07:16.208750  605972 out.go:179] * Done! kubectl is now configured to use "functional-787197" cluster and "default" namespace by default
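The version-skew note just above is informational rather than an error: kubectl is supported within one minor version of the API server in either direction, so a 1.33 client against a 1.34 control plane is within policy, and minikube typically only warns when the skew is larger. You can check the skew yourself with:

    # Print client and server versions; a one-minor-version difference is supported.
    kubectl version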
	
	
	==> CRI-O <==
	Oct 17 20:07:53 functional-787197 crio[3517]: time="2025-10-17T20:07:53.132103479Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-252zg Namespace:default ID:6e45e8412a69567d9b1365e51680eda8194889548c887fabf5fe7a228a6fc0e4 UID:e2eac087-f5dc-490c-948d-e0f751ec1108 NetNS:/var/run/netns/8277a7b7-e2d0-4a01-8444-ffd329acb3c3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cd70}] Aliases:map[]}"
	Oct 17 20:07:53 functional-787197 crio[3517]: time="2025-10-17T20:07:53.132268937Z" level=info msg="Checking pod default_hello-node-75c85bcc94-252zg for CNI network kindnet (type=ptp)"
	Oct 17 20:07:53 functional-787197 crio[3517]: time="2025-10-17T20:07:53.134827406Z" level=info msg="Ran pod sandbox 6e45e8412a69567d9b1365e51680eda8194889548c887fabf5fe7a228a6fc0e4 with infra container: default/hello-node-75c85bcc94-252zg/POD" id=50fbf6bb-4307-4505-819a-fcc705d6eed0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 20:07:53 functional-787197 crio[3517]: time="2025-10-17T20:07:53.147093152Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a09208d6-3d99-46a9-ae68-bdbe47d1db61 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.212984159Z" level=info msg="Stopping pod sandbox: c1b3f3625b5f186e64be5c4f6f2b64442a1aced55b49064cd9afa9af91aaf3e5" id=59aec35c-551d-4765-9c81-1f6d0899824d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.213041448Z" level=info msg="Stopped pod sandbox (already stopped): c1b3f3625b5f186e64be5c4f6f2b64442a1aced55b49064cd9afa9af91aaf3e5" id=59aec35c-551d-4765-9c81-1f6d0899824d name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.213415985Z" level=info msg="Removing pod sandbox: c1b3f3625b5f186e64be5c4f6f2b64442a1aced55b49064cd9afa9af91aaf3e5" id=185afa0b-1b5e-4542-863e-3ce654114660 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.217093829Z" level=info msg="Removed pod sandbox: c1b3f3625b5f186e64be5c4f6f2b64442a1aced55b49064cd9afa9af91aaf3e5" id=185afa0b-1b5e-4542-863e-3ce654114660 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.217713415Z" level=info msg="Stopping pod sandbox: 66377841d03e1bef1f2406e1c22de3c8699da8c9ef35bbbf8ab845a7f987ed5f" id=1dcda1e7-c407-46e3-9830-bef2defbc790 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.217766462Z" level=info msg="Stopped pod sandbox (already stopped): 66377841d03e1bef1f2406e1c22de3c8699da8c9ef35bbbf8ab845a7f987ed5f" id=1dcda1e7-c407-46e3-9830-bef2defbc790 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.21810108Z" level=info msg="Removing pod sandbox: 66377841d03e1bef1f2406e1c22de3c8699da8c9ef35bbbf8ab845a7f987ed5f" id=f1cfcb60-9a63-45e7-8131-b4ccef9d3421 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.221719264Z" level=info msg="Removed pod sandbox: 66377841d03e1bef1f2406e1c22de3c8699da8c9ef35bbbf8ab845a7f987ed5f" id=f1cfcb60-9a63-45e7-8131-b4ccef9d3421 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.222254172Z" level=info msg="Stopping pod sandbox: 7264f8fbd052450c753d9b8d91b923f9803fb2c03f48c1386e598f6eb5e7aa9c" id=3f4966de-1b87-4d31-bfde-e2e57ced1201 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.222302254Z" level=info msg="Stopped pod sandbox (already stopped): 7264f8fbd052450c753d9b8d91b923f9803fb2c03f48c1386e598f6eb5e7aa9c" id=3f4966de-1b87-4d31-bfde-e2e57ced1201 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.222638792Z" level=info msg="Removing pod sandbox: 7264f8fbd052450c753d9b8d91b923f9803fb2c03f48c1386e598f6eb5e7aa9c" id=77191317-2fd6-46c4-a346-2187358108b3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:07:56 functional-787197 crio[3517]: time="2025-10-17T20:07:56.226089048Z" level=info msg="Removed pod sandbox: 7264f8fbd052450c753d9b8d91b923f9803fb2c03f48c1386e598f6eb5e7aa9c" id=77191317-2fd6-46c4-a346-2187358108b3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 17 20:08:08 functional-787197 crio[3517]: time="2025-10-17T20:08:08.093414195Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=10e8adf7-d478-4afa-9a43-0f4d47c99b0b name=/runtime.v1.ImageService/PullImage
	Oct 17 20:08:17 functional-787197 crio[3517]: time="2025-10-17T20:08:17.092750752Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=45e6de09-f7bc-496b-9417-6f20fed0930c name=/runtime.v1.ImageService/PullImage
	Oct 17 20:08:33 functional-787197 crio[3517]: time="2025-10-17T20:08:33.092521465Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=70835ee9-1e47-4a21-ad4d-b316670384f0 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:09:07 functional-787197 crio[3517]: time="2025-10-17T20:09:07.092372405Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=dcef08fd-36a9-4705-980d-0adcaac6b4d7 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:09:23 functional-787197 crio[3517]: time="2025-10-17T20:09:23.092582817Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=251db770-e413-4ac6-8ede-a28b52879e9e name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:41 functional-787197 crio[3517]: time="2025-10-17T20:10:41.092074523Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7c8b9a33-c0be-4ff5-bf34-a6aace80f1f2 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:10:57 functional-787197 crio[3517]: time="2025-10-17T20:10:57.092608055Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=206d2e3a-5b1a-483f-8c60-9ce769386dd9 name=/runtime.v1.ImageService/PullImage
	Oct 17 20:13:32 functional-787197 crio[3517]: time="2025-10-17T20:13:32.093388744Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=92c20df7-d533-4872-be0b-7ab7dfe9dd4f name=/runtime.v1.ImageService/PullImage
	Oct 17 20:13:41 functional-787197 crio[3517]: time="2025-10-17T20:13:41.092549749Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=afe9e494-25e8-4728-a032-f9e1388098f2 name=/runtime.v1.ImageService/PullImage
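	
	Note: the PullImage attempts for kicbase/echo-server:latest above repeat for several minutes and never complete; the kubelet log at the end of this report records the underlying error ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). As a rough, hypothetical sketch only, assuming the node uses the standard containers-registries configuration and that docker.io is the intended registry (neither is confirmed by this report), a short-name alias placed on the node would let enforcing mode resolve the unqualified name; CRI-O typically needs a restart or reload to pick it up:
	
	  # hypothetical: /etc/containers/registries.conf.d/99-echo-server.conf on the node
	  [aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"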
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5331fc9b98835       docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a   9 minutes ago       Running             myfrontend                0                   2673e521d915a       sp-pod                                      default
	cfdca208fae5b       docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0   10 minutes ago      Running             nginx                     0                   7a7bde11dabbe       nginx-svc                                   default
	bbd50981f088e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   7f29b60cd6c23       coredns-66bc5c9577-mj8cn                    kube-system
	5b13a583d2963       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   9e9922fcc23fc       kube-proxy-qj9ps                            kube-system
	6ec72d9df2876       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   9bb1d8511d71c       kindnet-k6gn8                               kube-system
	567847d2757c4       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       3                   da54b914078a1       storage-provisioner                         kube-system
	d15666f99a0e3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   2939e8d51523e       kube-apiserver-functional-787197            kube-system
	1181fdc5cf970       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   76f4e55273c67       kube-scheduler-functional-787197            kube-system
	f66dc93577a6f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   acd4397b66094       kube-controller-manager-functional-787197   kube-system
	82e38910e7b08       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   1889833acde4b       etcd-functional-787197                      kube-system
	12f933853061c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       2                   da54b914078a1       storage-provisioner                         kube-system
	6f20794ffbb2a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   acd4397b66094       kube-controller-manager-functional-787197   kube-system
	e7d85d3b3b7f7       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   76f4e55273c67       kube-scheduler-functional-787197            kube-system
	c770304c166d9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   1889833acde4b       etcd-functional-787197                      kube-system
	692df764ebaf1       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   7f29b60cd6c23       coredns-66bc5c9577-mj8cn                    kube-system
	9cece6f4f5345       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   9bb1d8511d71c       kindnet-k6gn8                               kube-system
	9748d2a593b56       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   9e9922fcc23fc       kube-proxy-qj9ps                            kube-system
	
	
	==> coredns [692df764ebaf13fff3a4a71db5e869cb81b663d5a0793c03a2e540eca2821f96] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33714 - 3000 "HINFO IN 2231024180007778496.1761882470535272842. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.01329649s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [bbd50981f088e8c7094eec6e2aa0ae269e79b6e931f77d87dd91a2c0e9b049e1] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52974 - 53929 "HINFO IN 7808851736177444733.1119585946854809501. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004190056s
	
	
	==> describe nodes <==
	Name:               functional-787197
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-787197
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=functional-787197
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_05_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:05:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-787197
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:17:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:17:21 +0000   Fri, 17 Oct 2025 20:05:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:17:21 +0000   Fri, 17 Oct 2025 20:05:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:17:21 +0000   Fri, 17 Oct 2025 20:05:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:17:21 +0000   Fri, 17 Oct 2025 20:05:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-787197
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a65ba5b7-2dd8-47a1-b490-9bfb70d9c5e9
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-252zg                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m44s
	  default                     hello-node-connect-7d85dfc575-c4vs6          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m52s
	  kube-system                 coredns-66bc5c9577-mj8cn                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-787197                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-k6gn8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-787197             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-787197    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qj9ps                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-787197             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-787197 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-787197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-787197 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-787197 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-787197 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-787197 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-787197 event: Registered Node functional-787197 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-787197 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-787197 event: Registered Node functional-787197 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-787197 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-787197 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-787197 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-787197 event: Registered Node functional-787197 in Controller
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [82e38910e7b08a428685159d25fd25a27f8d1f9736d046f8c7f10c4d3532868b] <==
	{"level":"warn","ts":"2025-10-17T20:06:59.344259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.377343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.415404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.440429Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.455909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.478250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.499351Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.523078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.535052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.551957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.571688Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.593919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.609850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.633789Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.647159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.678122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.683963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.705329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.736435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.752050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.771027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:59.871175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40166","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:16:58.185188Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1135}
	{"level":"info","ts":"2025-10-17T20:16:58.207915Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1135,"took":"22.430575ms","hash":3153201078,"current-db-size-bytes":3264512,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1441792,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-10-17T20:16:58.207965Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3153201078,"revision":1135,"compact-revision":-1}
	
	
	==> etcd [c770304c166d9f7a84d2de0378b3ddccb122d84903855636f5887bd0fb334f54] <==
	{"level":"warn","ts":"2025-10-17T20:06:13.167950Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.186136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.247212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.271827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.303370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.321638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:06:13.387746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:06:40.274405Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T20:06:40.274470Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-787197","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-17T20:06:40.274575Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:06:40.440717Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:06:40.440807Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:06:40.440832Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-17T20:06:40.440864Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-17T20:06:40.440951Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:06:40.440990Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:06:40.441003Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:06:40.440970Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-17T20:06:40.441078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:06:40.441118Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:06:40.441156Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:06:40.444918Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-17T20:06:40.445009Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:06:40.445043Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-17T20:06:40.445050Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-787197","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 20:17:36 up  3:00,  0 user,  load average: 0.12, 0.34, 1.41
	Linux functional-787197 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6ec72d9df28767e585fe199274de8684ec315d2822fac900b356560774b26c32] <==
	I1017 20:15:31.728474       1 main.go:301] handling current node
	I1017 20:15:41.728622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:15:41.728663       1 main.go:301] handling current node
	I1017 20:15:51.727201       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:15:51.727336       1 main.go:301] handling current node
	I1017 20:16:01.727014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:01.727148       1 main.go:301] handling current node
	I1017 20:16:11.734212       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:11.734247       1 main.go:301] handling current node
	I1017 20:16:21.735769       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:21.735805       1 main.go:301] handling current node
	I1017 20:16:31.727599       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:31.727633       1 main.go:301] handling current node
	I1017 20:16:41.734261       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:41.734295       1 main.go:301] handling current node
	I1017 20:16:51.731272       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:16:51.731321       1 main.go:301] handling current node
	I1017 20:17:01.727370       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:17:01.727489       1 main.go:301] handling current node
	I1017 20:17:11.728659       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:17:11.728932       1 main.go:301] handling current node
	I1017 20:17:21.728036       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:17:21.728164       1 main.go:301] handling current node
	I1017 20:17:31.728377       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:17:31.728418       1 main.go:301] handling current node
	
	
	==> kindnet [9cece6f4f5345aa2aa26f5b67c16f5dab463958719024b2b58d29321f8684c26] <==
	I1017 20:06:10.522705       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:06:10.522931       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1017 20:06:10.523092       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:06:10.523122       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:06:10.523137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:06:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:06:10.805894       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:06:10.805969       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:06:10.806293       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:06:10.807049       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:06:10.839033       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:06:10.840034       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:06:10.840156       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 20:06:10.840248       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1017 20:06:14.907689       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:06:14.907758       1 metrics.go:72] Registering metrics
	I1017 20:06:14.907814       1 controller.go:711] "Syncing nftables rules"
	I1017 20:06:20.808205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:06:20.808263       1 main.go:301] handling current node
	I1017 20:06:30.805818       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:06:30.805879       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d15666f99a0e34e0c3d21bc9f869625810c91970cd8e82ff741314fcf87467d2] <==
	I1017 20:07:01.020268       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:07:01.020305       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:07:01.033558       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:07:01.033684       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:07:01.033716       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:07:01.033747       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:07:01.033984       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:07:01.034006       1 policy_source.go:240] refreshing policies
	I1017 20:07:01.040101       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 20:07:01.111965       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:07:01.704146       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:07:02.580857       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 20:07:02.705457       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:07:02.777239       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 20:07:02.785440       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 20:07:04.444909       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:07:04.595311       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:07:04.647589       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:07:19.510122       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.108.254.89"}
	I1017 20:07:25.685435       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.102.58.157"}
	I1017 20:07:34.434025       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.212.51"}
	E1017 20:07:43.446426       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:43720: use of closed network connection
	E1017 20:07:52.687264       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47466: use of closed network connection
	I1017 20:07:52.887682       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.58.53"}
	I1017 20:17:00.940809       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6f20794ffbb2ad9f7cd4f31f9c06d2b0724806510e4bac6230bca73c2b621719] <==
	I1017 20:06:18.248090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:06:18.248201       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:06:18.248264       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:06:18.254776       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:18.254798       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:06:18.254807       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:06:18.258210       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:06:18.259715       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:06:18.263544       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:06:18.266744       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:06:18.266849       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:06:18.266942       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-787197"
	I1017 20:06:18.266989       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:06:18.269207       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:06:18.277308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:06:18.285994       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:06:18.286024       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:06:18.286039       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:06:18.286051       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:06:18.286112       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:06:18.286262       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:06:18.286199       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:06:18.288238       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:06:18.289312       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:06:18.298127       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [f66dc93577a6f44b3356da039e9ec041c1dc0eaf81a3e21438a833e6b2cc6c8a] <==
	I1017 20:07:04.259251       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:07:04.260425       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:07:04.260467       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 20:07:04.263651       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:07:04.266145       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:07:04.267990       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 20:07:04.269303       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:07:04.270743       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:07:04.270928       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 20:07:04.270998       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:07:04.271079       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-787197"
	I1017 20:07:04.271150       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 20:07:04.274436       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:07:04.277610       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:07:04.278928       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 20:07:04.278976       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:07:04.283513       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:04.285777       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:07:04.285849       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:07:04.286930       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:07:04.288947       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:07:04.289061       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:07:04.289167       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 20:07:04.289213       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:07:04.293374       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	
	
	==> kube-proxy [5b13a583d2963021deeccbedcd0d90629f46b57291997afc672535720c5d87d4] <==
	I1017 20:07:01.606712       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:07:01.778813       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:07:01.879841       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:07:01.880047       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 20:07:01.880166       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:07:01.916285       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:07:01.916395       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:07:01.933398       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:07:01.933784       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:07:01.934058       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:01.935551       1 config.go:200] "Starting service config controller"
	I1017 20:07:01.935619       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:07:01.935664       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:07:01.935710       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:07:01.935747       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:07:01.935782       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:07:01.936562       1 config.go:309] "Starting node config controller"
	I1017 20:07:01.936633       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:07:01.936663       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:07:02.035720       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:07:02.035892       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:07:02.035910       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [9748d2a593b567df50863438cc8e63bcdf7e97b1d21c07cfd491b041a0017106] <==
	I1017 20:06:12.629698       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:06:14.056256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:06:14.973468       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:06:14.973521       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 20:06:14.973665       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:06:15.312567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:06:15.312627       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:06:15.319748       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:06:15.320038       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:06:15.328797       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:06:15.330102       1 config.go:200] "Starting service config controller"
	I1017 20:06:15.330114       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:06:15.330131       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:06:15.330135       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:06:15.330145       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:06:15.330149       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:06:15.330779       1 config.go:309] "Starting node config controller"
	I1017 20:06:15.330786       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:06:15.330792       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:06:15.431371       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:06:15.435168       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:06:15.435257       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1181fdc5cf970b1b700a46712e56ef36a608c8ce05735dd9d9bd73507fa63f8e] <==
	I1017 20:06:59.875408       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:07:01.622793       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:07:01.622936       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:07:01.635652       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:07:01.635939       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:07:01.635987       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:07:01.636044       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:07:01.640331       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:01.640419       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:01.640464       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:01.641829       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:07:01.736276       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:07:01.740664       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:07:01.742954       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [e7d85d3b3b7f734ea07b92ddfd5ee45eae92d48b4e4d38676d91c760d0dce2e5] <==
	I1017 20:06:15.491463       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:06:16.797924       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:06:16.797957       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:06:16.802929       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:06:16.803050       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:06:16.803150       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:16.803186       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:16.803237       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:16.803266       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:16.803395       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:06:16.803508       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:06:16.904224       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:06:16.904349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:16.905158       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:40.273655       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 20:06:40.273676       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 20:06:40.273699       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 20:06:40.273754       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:06:40.273779       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:06:40.273799       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1017 20:06:40.274023       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 20:06:40.274053       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 17 20:14:52 functional-787197 kubelet[3829]: E1017 20:14:52.092648    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:15:06 functional-787197 kubelet[3829]: E1017 20:15:06.093277    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:15:07 functional-787197 kubelet[3829]: E1017 20:15:07.092642    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:15:18 functional-787197 kubelet[3829]: E1017 20:15:18.094117    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:15:21 functional-787197 kubelet[3829]: E1017 20:15:21.092087    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:15:30 functional-787197 kubelet[3829]: E1017 20:15:30.092896    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:15:34 functional-787197 kubelet[3829]: E1017 20:15:34.092378    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:15:42 functional-787197 kubelet[3829]: E1017 20:15:42.094206    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:15:45 functional-787197 kubelet[3829]: E1017 20:15:45.091989    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:15:57 functional-787197 kubelet[3829]: E1017 20:15:57.092034    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:15:58 functional-787197 kubelet[3829]: E1017 20:15:58.092466    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:16:09 functional-787197 kubelet[3829]: E1017 20:16:09.091925    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:16:12 functional-787197 kubelet[3829]: E1017 20:16:12.092256    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:16:24 functional-787197 kubelet[3829]: E1017 20:16:24.091499    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:16:25 functional-787197 kubelet[3829]: E1017 20:16:25.091829    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:16:36 functional-787197 kubelet[3829]: E1017 20:16:36.092462    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:16:39 functional-787197 kubelet[3829]: E1017 20:16:39.091895    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:16:48 functional-787197 kubelet[3829]: E1017 20:16:48.092250    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:16:50 functional-787197 kubelet[3829]: E1017 20:16:50.092178    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:17:00 functional-787197 kubelet[3829]: E1017 20:17:00.093739    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:17:02 functional-787197 kubelet[3829]: E1017 20:17:02.092731    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:17:13 functional-787197 kubelet[3829]: E1017 20:17:13.091764    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:17:13 functional-787197 kubelet[3829]: E1017 20:17:13.091856    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	Oct 17 20:17:24 functional-787197 kubelet[3829]: E1017 20:17:24.092491    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-252zg" podUID="e2eac087-f5dc-490c-948d-e0f751ec1108"
	Oct 17 20:17:25 functional-787197 kubelet[3829]: E1017 20:17:25.092422    3829 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-c4vs6" podUID="472cb1f6-2116-4a8a-a2b1-ffcdbff530d6"
	
	
	==> storage-provisioner [12f933853061ca2dadad7fc9f7def334451ad1550aa0030801c98f534bcc2615] <==
	I1017 20:06:22.177844       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 20:06:22.191238       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 20:06:22.191361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 20:06:22.194015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:25.649536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:29.910339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:33.508371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:36.562903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:39.585363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:39.592187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:06:39.592358       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 20:06:39.592518       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-787197_f6d0041f-90bb-43f0-9363-9902115128ef!
	I1017 20:06:39.593440       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a66ce8cd-67a4-46ab-9c77-19361fab8149", APIVersion:"v1", ResourceVersion:"571", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-787197_f6d0041f-90bb-43f0-9363-9902115128ef became leader
	W1017 20:06:39.606288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:06:39.614267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 20:06:39.699172       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-787197_f6d0041f-90bb-43f0-9363-9902115128ef!
	
	
	==> storage-provisioner [567847d2757c4693df0396afb7c68e712127862992d012e84d7453d632d7eb22] <==
	W1017 20:17:11.836893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:13.840506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:13.846678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:15.849836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:15.856665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:17.859840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:17.864262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:19.866950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:19.871209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:21.873882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:21.878409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:23.881608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:23.886065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:25.889182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:25.893441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:27.897162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:27.904027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:29.906574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:29.910800       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:31.913795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:31.918326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:33.921177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:33.925428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:35.929434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 20:17:35.935192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787197 -n functional-787197
helpers_test.go:269: (dbg) Run:  kubectl --context functional-787197 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-252zg hello-node-connect-7d85dfc575-c4vs6
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-787197 describe pod hello-node-75c85bcc94-252zg hello-node-connect-7d85dfc575-c4vs6
helpers_test.go:290: (dbg) kubectl --context functional-787197 describe pod hello-node-75c85bcc94-252zg hello-node-connect-7d85dfc575-c4vs6:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-252zg
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-787197/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 20:07:52 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7pvm8 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7pvm8:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m44s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-252zg to functional-787197
	  Normal   Pulling    6m40s (x5 over 9m44s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m40s (x5 over 9m44s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m40s (x5 over 9m44s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m41s (x20 over 9m44s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m30s (x21 over 9m44s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-c4vs6
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-787197/192.168.49.2
	Start Time:       Fri, 17 Oct 2025 20:07:34 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7lh4 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z7lh4:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-c4vs6 to functional-787197
	  Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.55s)
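
Note: every pull failure in this test traces back to the same kubelet line above: CRI-O is running with short-name resolution in enforcing mode, so the unqualified reference "kicbase/echo-server" resolves to an ambiguous candidate list and the pull is rejected. A minimal sketch of two common ways around that error follows; the registry (docker.io), the tag (1.0), and the drop-in file name are illustrative assumptions, not values taken from this run.

    # Reference the image by a fully-qualified name so no short-name resolution is needed
    # (registry and tag below are assumptions, not the test's actual values).
    kubectl --context functional-787197 create deployment hello-node-connect \
      --image=docker.io/kicbase/echo-server:1.0

    # Or let CRI-O resolve the short name through an alias in a registries.conf.d
    # drop-in on the node (file name and alias target are assumptions):
    #   /etc/containers/registries.conf.d/99-echo-server.conf
    #   [aliases]
    #   "kicbase/echo-server" = "docker.io/kicbase/echo-server"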

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-787197 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-787197 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-252zg" [e2eac087-f5dc-490c-948d-e0f751ec1108] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1017 20:08:06.188452  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:10:22.320919  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:10:50.030613  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:15:22.320931  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-787197 -n functional-787197
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-17 20:17:53.347657642 +0000 UTC m=+1241.193619081
functional_test.go:1460: (dbg) Run:  kubectl --context functional-787197 describe po hello-node-75c85bcc94-252zg -n default
functional_test.go:1460: (dbg) kubectl --context functional-787197 describe po hello-node-75c85bcc94-252zg -n default:
Name:             hello-node-75c85bcc94-252zg
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-787197/192.168.49.2
Start Time:       Fri, 17 Oct 2025 20:07:52 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7pvm8 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7pvm8:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-252zg to functional-787197
Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-787197 logs hello-node-75c85bcc94-252zg -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-787197 logs hello-node-75c85bcc94-252zg -n default: exit status 1 (86.584273ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-252zg" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-787197 logs hello-node-75c85bcc94-252zg -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.87s)
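
Note: the deployment itself was created successfully; the pod never left ImagePullBackOff, so the 10m0s wait on app=hello-node timed out. When reproducing locally, the waiting reason can be surfaced directly instead of waiting out the timeout. This is a sketch reusing only the context and labels from this run.

    # Show each hello-node pod together with the reason its container is stuck waiting.
    kubectl --context functional-787197 get pods -l app=hello-node \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'

    # Follow the rollout with an explicit, shorter bound than the test's 10m0s wait.
    kubectl --context functional-787197 rollout status deployment/hello-node --timeout=2m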

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 service --namespace=default --https --url hello-node: exit status 115 (490.493245ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31895
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-787197 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)
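
Note: the NodePort URL is printed (https://192.168.49.2:31895), but minikube exits with SVC_UNREACHABLE because no ready pod backs the hello-node service. A quick way to confirm the service has no ready endpoints is sketched below, using only the context and service name from this run.

    # The service exists, but with no ready pod its endpoint slices stay empty.
    kubectl --context functional-787197 get svc hello-node -o wide
    kubectl --context functional-787197 get endpointslices \
      -l kubernetes.io/service-name=hello-node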

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 service hello-node --url --format={{.IP}}: exit status 115 (504.43372ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-787197 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.50s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 service hello-node --url: exit status 115 (464.824937ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31895
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-787197 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31895
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image load --daemon kicbase/echo-server:functional-787197 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787197" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)
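
Note: image load --daemon copies a tag from the host's Docker daemon into the cluster's container runtime, so the follow-up image ls check is only meaningful if the tag exists locally first. A hedged verification sketch follows; the grep pattern is illustrative, and how CRI-O prefixes the loaded name in the listing is not asserted here.

    # Confirm the tag exists in the local Docker daemon before loading it.
    docker image inspect kicbase/echo-server:functional-787197 --format '{{.Id}}'

    # Load it into the cluster runtime and list what the runtime actually stores.
    out/minikube-linux-arm64 -p functional-787197 image load --daemon kicbase/echo-server:functional-787197
    out/minikube-linux-arm64 -p functional-787197 image ls --format table | grep echo-server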

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image load --daemon kicbase/echo-server:functional-787197 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787197" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-787197
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image load --daemon kicbase/echo-server:functional-787197 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-787197" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image save kicbase/echo-server:functional-787197 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.43s)
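
Note: image save can only write a tarball for an image that is actually present in the cluster runtime; since the earlier loads never landed, no file was produced, and the later ImageLoadFromFile test fails on the same missing tarball. A minimal check-then-save sketch follows, with an assumed /tmp output path rather than the Jenkins workspace path used by the test.

    # Only attempt the save if the runtime reports the tag; otherwise there is nothing to export.
    out/minikube-linux-arm64 -p functional-787197 image ls | grep functional-787197 \
      && out/minikube-linux-arm64 -p functional-787197 image save \
           kicbase/echo-server:functional-787197 /tmp/echo-server-save.tar \
      && ls -l /tmp/echo-server-save.tar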

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1017 20:18:07.674569  614548 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:18:07.681002  614548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:18:07.681125  614548 out.go:374] Setting ErrFile to fd 2...
	I1017 20:18:07.681153  614548 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:18:07.681568  614548 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:18:07.682956  614548 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:18:07.683176  614548 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:18:07.683692  614548 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
	I1017 20:18:07.700677  614548 ssh_runner.go:195] Run: systemctl --version
	I1017 20:18:07.700745  614548 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
	I1017 20:18:07.717854  614548 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
	I1017 20:18:07.825641  614548 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1017 20:18:07.825694  614548 cache_images.go:254] Failed to load cached images for "functional-787197": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1017 20:18:07.825720  614548 cache_images.go:266] failed pushing to: functional-787197

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-787197
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image save --daemon kicbase/echo-server:functional-787197 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-787197
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-787197: exit status 1 (20.496889ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-787197

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-787197

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
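
Note: image save --daemon copies the image from the cluster runtime back into the host's Docker daemon, and the test then looks it up under the localhost/ repository prefix. With the earlier loads having failed, there was likely nothing in the runtime to copy back, which is why Docker reports no such image. A sketch of the same round-trip check:

    # After a successful save --daemon, the tag should be listable in the local Docker daemon.
    out/minikube-linux-arm64 -p functional-787197 image save --daemon kicbase/echo-server:functional-787197
    docker image ls localhost/kicbase/echo-server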

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (543.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 stop --alsologtostderr -v 5: (38.358947173s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 start --wait true --alsologtostderr -v 5
E1017 20:25:09.097203  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:25:22.321162  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:27:25.233573  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:27:52.939568  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:30:22.321433  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:32:25.233269  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-858120 start --wait true --alsologtostderr -v 5: exit status 80 (8m22.627154135s)

                                                
                                                
-- stdout --
	* [ha-858120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-858120" primary control-plane node in "ha-858120" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	* Enabled addons: 
	
	* Starting "ha-858120-m02" control-plane node in "ha-858120" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-858120-m03" control-plane node in "ha-858120" cluster
	* Pulling base image v0.0.48-1760609789-21757 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:24:49.626381  633180 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:24:49.626517  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626528  633180 out.go:374] Setting ErrFile to fd 2...
	I1017 20:24:49.626533  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626788  633180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:24:49.627220  633180 out.go:368] Setting JSON to false
	I1017 20:24:49.628041  633180 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11236,"bootTime":1760721454,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:24:49.628110  633180 start.go:141] virtualization:  
	I1017 20:24:49.633530  633180 out.go:179] * [ha-858120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:24:49.636591  633180 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:24:49.636670  633180 notify.go:220] Checking for updates...
	I1017 20:24:49.642574  633180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:24:49.645486  633180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:49.648436  633180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:24:49.651294  633180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:24:49.654188  633180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:24:49.657632  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:49.657777  633180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:24:49.688170  633180 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:24:49.688301  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.745303  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.735869738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.745414  633180 docker.go:318] overlay module found
	I1017 20:24:49.748552  633180 out.go:179] * Using the docker driver based on existing profile
	I1017 20:24:49.751497  633180 start.go:305] selected driver: docker
	I1017 20:24:49.751513  633180 start.go:925] validating driver "docker" against &{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.751702  633180 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:24:49.751804  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.806673  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.798122578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.807082  633180 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:24:49.807153  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:49.807223  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:49.807278  633180 start.go:349] cluster config:
	{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.810596  633180 out.go:179] * Starting "ha-858120" primary control-plane node in "ha-858120" cluster
	I1017 20:24:49.813288  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:49.816087  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:49.818802  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:49.818879  633180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:24:49.818889  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:49.818892  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:49.819084  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:49.819096  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:49.819258  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:49.838368  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:49.838387  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:49.838401  633180 cache.go:232] Successfully downloaded all kic artifacts
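The restart path above first checks whether the kicbase image and the preloaded image tarball already exist locally before deciding to pull or download anything. A quick manual way to confirm the same preconditions on this host (a sketch; assumes the docker CLI and the cache paths shown in the log):

    docker images gcr.io/k8s-minikube/kicbase-builds                                        # kicbase image present in the local daemon
    ls /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/   # preload tarball present in the cache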
	I1017 20:24:49.838423  633180 start.go:360] acquireMachinesLock for ha-858120: {Name:mk62278368bd1da921b0ccf6844a662f4fa595df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:49.838475  633180 start.go:364] duration metric: took 34.511µs to acquireMachinesLock for "ha-858120"
	I1017 20:24:49.838494  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:49.838499  633180 fix.go:54] fixHost starting: 
	I1017 20:24:49.838762  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:49.855336  633180 fix.go:112] recreateIfNeeded on ha-858120: state=Stopped err=<nil>
	W1017 20:24:49.855369  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:49.858630  633180 out.go:252] * Restarting existing docker container for "ha-858120" ...
	I1017 20:24:49.858710  633180 cli_runner.go:164] Run: docker start ha-858120
	I1017 20:24:50.114094  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:50.136057  633180 kic.go:430] container "ha-858120" state is running.
	I1017 20:24:50.136454  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:50.160255  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:50.160500  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:50.160583  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:50.184023  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:50.184342  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:50.184352  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:50.185019  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40052->127.0.0.1:33552: read: connection reset by peer
	I1017 20:24:53.330671  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.330705  633180 ubuntu.go:182] provisioning hostname "ha-858120"
	I1017 20:24:53.330778  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.348402  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.348733  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.348751  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120 && echo "ha-858120" | sudo tee /etc/hostname
	I1017 20:24:53.508835  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.508970  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.526510  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.526830  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.526846  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:24:53.671383  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:24:53.671409  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:24:53.671452  633180 ubuntu.go:190] setting up certificates
	I1017 20:24:53.671461  633180 provision.go:84] configureAuth start
	I1017 20:24:53.671530  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:53.689159  633180 provision.go:143] copyHostCerts
	I1017 20:24:53.689210  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689244  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:24:53.689256  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689334  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:24:53.689461  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689496  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:24:53.689506  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689536  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:24:53.689582  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689603  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:24:53.689611  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689635  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:24:53.689684  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120 san=[127.0.0.1 192.168.49.2 ha-858120 localhost minikube]
	I1017 20:24:54.151535  633180 provision.go:177] copyRemoteCerts
	I1017 20:24:54.151620  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:24:54.151667  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.170207  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.274864  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:24:54.274925  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:24:54.292724  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:24:54.292785  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 20:24:54.311391  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:24:54.311452  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:24:54.329407  633180 provision.go:87] duration metric: took 657.913595ms to configureAuth
	I1017 20:24:54.329435  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:24:54.329671  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:54.329775  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.347176  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:54.347484  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:54.347504  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:24:54.678767  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:24:54.678791  633180 machine.go:96] duration metric: took 4.518274151s to provisionDockerMachine
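The SSH command above writes /etc/sysconfig/crio.minikube with an extra --insecure-registry flag for the service CIDR and then restarts cri-o. A sketch for verifying the option took effect, assuming the kicbase crio unit sources that sysconfig file:

    sudo cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i environment   # check whether the unit actually sources the file (assumption)
    ps -o args= -C crio                        # the --insecure-registry 10.96.0.0/12 flag should appear here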
	I1017 20:24:54.678802  633180 start.go:293] postStartSetup for "ha-858120" (driver="docker")
	I1017 20:24:54.678813  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:24:54.678876  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:24:54.678922  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.699409  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.802879  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:24:54.806060  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:24:54.806088  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:24:54.806100  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:24:54.806152  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:24:54.806232  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:24:54.806239  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:24:54.806342  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:24:54.813547  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:54.830587  633180 start.go:296] duration metric: took 151.77042ms for postStartSetup
	I1017 20:24:54.830688  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:24:54.830734  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.847827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.948374  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:24:54.953275  633180 fix.go:56] duration metric: took 5.114768478s for fixHost
	I1017 20:24:54.953301  633180 start.go:83] releasing machines lock for "ha-858120", held for 5.114818193s
	I1017 20:24:54.953368  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:54.969761  633180 ssh_runner.go:195] Run: cat /version.json
	I1017 20:24:54.969816  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.970081  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:24:54.970130  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.994236  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.003341  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.198024  633180 ssh_runner.go:195] Run: systemctl --version
	I1017 20:24:55.204628  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:24:55.242919  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:24:55.247648  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:24:55.247728  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:24:55.255380  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:24:55.255403  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:24:55.255433  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:24:55.255479  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:24:55.270476  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:24:55.283296  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:24:55.283382  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:24:55.298839  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:24:55.311724  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:24:55.424434  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:24:55.537289  633180 docker.go:234] disabling docker service ...
	I1017 20:24:55.537361  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:24:55.553026  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:24:55.566351  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:24:55.681250  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:24:55.798405  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
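With cri-docker and docker stopped, disabled, and masked, cri-o is left as the only container runtime on the node. A quick check that the masking stuck (sketch):

    systemctl is-enabled docker.socket docker.service cri-docker.socket cri-docker.service   # expect "masked"
    systemctl is-active crio docker containerd                                               # only crio should be active after the restart below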
	I1017 20:24:55.811378  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:24:55.825585  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:24:55.825661  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.834063  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:24:55.834172  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.843151  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.851611  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.860130  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:24:55.867797  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.876324  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.884581  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.892952  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:24:55.900323  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:24:55.907965  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.021101  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
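The sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The expected result, derivable from those commands (sketch):

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",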
	I1017 20:24:56.158831  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:24:56.158928  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:24:56.162776  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:24:56.162859  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:24:56.166390  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:24:56.192830  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:24:56.192972  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.221409  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.254422  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:24:56.257178  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:24:56.271792  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:24:56.275653  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.285727  633180 kubeadm.go:883] updating cluster {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:24:56.285880  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:56.285942  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.320941  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.320965  633180 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:24:56.321020  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.345716  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.345741  633180 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:24:56.345750  633180 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 20:24:56.345858  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
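In the kubelet unit above, the empty ExecStart= line is the standard systemd drop-in pattern: it clears the ExecStart inherited from the base unit so that only the minikube-specific command line runs. To see the merged unit on the node (sketch):

    systemctl cat kubelet   # base unit plus the 10-kubeadm.conf drop-in written further below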
	I1017 20:24:56.345940  633180 ssh_runner.go:195] Run: crio config
	I1017 20:24:56.409511  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:56.409542  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:56.409567  633180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:24:56.409589  633180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-858120 NodeName:ha-858120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:24:56.410072  633180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-858120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
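The generated config is copied to /var/tmp/minikube/kubeadm.yaml.new in the scp step further below; it can be sanity-checked on the node first. A sketch, assuming this kubeadm build supports the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new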
	
	I1017 20:24:56.410096  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:24:56.410163  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:24:56.425787  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:56.425947  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
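The manifest above runs kube-vip as a static pod that holds the control-plane VIP 192.168.49.254 on eth0 via ARP, with leader election through the plndr-cp-lock lease (the lsmod check earlier shows it fell back from IPVS load balancing because the ip_vs modules were unavailable). A sketch for checking which control-plane node currently holds the VIP once the cluster is back up:

    kubectl -n kube-system get lease plndr-cp-lock   # HOLDER column shows the current leader
    ip addr show eth0 | grep 192.168.49.254          # the address is bound only on the leader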
	I1017 20:24:56.426028  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:24:56.433575  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:24:56.433642  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 20:24:56.441456  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 20:24:56.453796  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:24:56.466376  633180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 20:24:56.480780  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:24:56.493351  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:24:56.497083  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.507006  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.614355  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:24:56.631138  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.2
	I1017 20:24:56.631170  633180 certs.go:195] generating shared ca certs ...
	I1017 20:24:56.631205  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:56.631352  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:24:56.631435  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:24:56.631448  633180 certs.go:257] generating profile certs ...
	I1017 20:24:56.631532  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:24:56.631567  633180 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f
	I1017 20:24:56.631581  633180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 20:24:57.260314  633180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f ...
	I1017 20:24:57.260390  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f: {Name:mk0eeb82ef1c3e333bd14f384361a665d81ea399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260624  633180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f ...
	I1017 20:24:57.260661  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f: {Name:mkd9170cb1ed384cce4c4204f35083d5972d0281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260803  633180 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt
	I1017 20:24:57.260987  633180 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key
	I1017 20:24:57.261179  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:24:57.261215  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:24:57.261249  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:24:57.261296  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:24:57.261335  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:24:57.261369  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:24:57.261415  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:24:57.261450  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:24:57.261591  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:24:57.261674  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:24:57.261740  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:24:57.261777  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:24:57.261824  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:24:57.261878  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:24:57.261950  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:24:57.262030  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:57.262099  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.262148  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.262186  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.262769  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:24:57.292641  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:24:57.324994  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:24:57.350011  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:24:57.393934  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:24:57.425087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:24:57.476207  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:24:57.521477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:24:57.553659  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:24:57.581891  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:24:57.616931  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:24:57.653395  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:24:57.676685  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:24:57.687849  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:24:57.697063  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701415  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701527  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.748713  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:24:57.761692  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:24:57.778101  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782605  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782719  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.851750  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:24:57.860250  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:24:57.872947  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877259  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877426  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.935424  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
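The symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash links: openssl x509 -hash -noout prints the hash that OpenSSL looks up under /etc/ssl/certs when it verifies a chain. Reproducing one by hand (sketch):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/$h.0"   # expected: b5213941.0 -> /etc/ssl/certs/minikubeCA.pem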
	I1017 20:24:57.948490  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:24:57.952867  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:24:58.010016  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:24:58.063976  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:24:58.108039  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:24:58.150227  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:24:58.194750  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
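Each openssl -checkend 86400 call above exits 0 only if the certificate is still valid for at least the next 86400 seconds (24 hours), presumably so the restart path can decide whether any cert needs regeneration. The same check with explicit output (sketch):

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "ok: valid for at least 24h" || echo "renew: expires within 24h"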
	I1017 20:24:58.245919  633180 kubeadm.go:400] StartCluster: {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:58.246100  633180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:24:58.246199  633180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:24:58.291268  633180 cri.go:89] found id: "ee8a159707f901bec7d65f64a977c75fa75282a553082688f13964bab6bed5f2"
	I1017 20:24:58.291334  633180 cri.go:89] found id: "62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1"
	I1017 20:24:58.291353  633180 cri.go:89] found id: "09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b"
	I1017 20:24:58.291371  633180 cri.go:89] found id: "56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	I1017 20:24:58.291391  633180 cri.go:89] found id: "7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6"
	I1017 20:24:58.291421  633180 cri.go:89] found id: ""
	I1017 20:24:58.291493  633180 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:24:58.311475  633180 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:24:58Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:24:58.311623  633180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:24:58.320631  633180 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:24:58.320702  633180 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:24:58.320786  633180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:24:58.333311  633180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:58.333829  633180 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-858120" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.333984  633180 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "ha-858120" cluster setting kubeconfig missing "ha-858120" context setting]
	I1017 20:24:58.334333  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.334925  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:24:58.335797  633180 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:24:58.335856  633180 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 20:24:58.335916  633180 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:24:58.335942  633180 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:24:58.335963  633180 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:24:58.335987  633180 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:24:58.336351  633180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:24:58.349523  633180 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 20:24:58.349592  633180 kubeadm.go:601] duration metric: took 28.869563ms to restartPrimaryControlPlane
	I1017 20:24:58.349615  633180 kubeadm.go:402] duration metric: took 103.705091ms to StartCluster
	I1017 20:24:58.349647  633180 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.349744  633180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.350418  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.350679  633180 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:24:58.350724  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:24:58.350749  633180 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:24:58.351348  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.356477  633180 out.go:179] * Enabled addons: 
	I1017 20:24:58.359610  633180 addons.go:514] duration metric: took 8.847324ms for enable addons: enabled=[]
	I1017 20:24:58.359682  633180 start.go:246] waiting for cluster config update ...
	I1017 20:24:58.359707  633180 start.go:255] writing updated cluster config ...
	I1017 20:24:58.363052  633180 out.go:203] 
	I1017 20:24:58.366186  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.366342  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.369685  633180 out.go:179] * Starting "ha-858120-m02" control-plane node in "ha-858120" cluster
	I1017 20:24:58.372589  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:58.375487  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:58.378319  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:58.378348  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:58.378444  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:58.378455  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:58.378576  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.378776  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:58.404390  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:58.404414  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:58.404426  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:58.404451  633180 start.go:360] acquireMachinesLock for ha-858120-m02: {Name:mk29f876727465da439698dbf4948f688d19b698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:58.404504  633180 start.go:364] duration metric: took 36.981µs to acquireMachinesLock for "ha-858120-m02"
	I1017 20:24:58.404523  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:58.404529  633180 fix.go:54] fixHost starting: m02
	I1017 20:24:58.404783  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.432805  633180 fix.go:112] recreateIfNeeded on ha-858120-m02: state=Stopped err=<nil>
	W1017 20:24:58.432831  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:58.436247  633180 out.go:252] * Restarting existing docker container for "ha-858120-m02" ...
	I1017 20:24:58.436330  633180 cli_runner.go:164] Run: docker start ha-858120-m02
	I1017 20:24:58.871041  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.895697  633180 kic.go:430] container "ha-858120-m02" state is running.
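Restarting the stopped m02 machine is a plain `docker start` followed by polling the container state until Docker reports "running", exactly as the inspect calls above show. A small sketch of that loop with os/exec (container name from the log; the one-minute budget is illustrative, not minikube's value):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		const name = "ha-858120-m02"
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			fmt.Println("docker start:", err)
			return
		}
		// Poll the container state until it is running, the way the restarted
		// machine is checked before provisioning it over SSH.
		deadline := time.Now().Add(time.Minute)
		for time.Now().Before(deadline) {
			state, err := containerState(name)
			if err == nil && state == "running" {
				fmt.Printf("container %q state is running\n", name)
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for container to run")
	}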
	I1017 20:24:58.896208  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:24:58.931596  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.931856  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:58.931915  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:24:58.966121  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:58.966428  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:24:58.966438  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:58.967202  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57724->127.0.0.1:33557: read: connection reset by peer
	I1017 20:25:02.146984  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.147066  633180 ubuntu.go:182] provisioning hostname "ha-858120-m02"
	I1017 20:25:02.147179  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.180883  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.181193  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.181204  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m02 && echo "ha-858120-m02" | sudo tee /etc/hostname
	I1017 20:25:02.371014  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.371118  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.406904  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.407240  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.407264  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:02.593559  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:02.593637  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:02.593669  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:02.593708  633180 provision.go:84] configureAuth start
	I1017 20:25:02.593805  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:02.623320  633180 provision.go:143] copyHostCerts
	I1017 20:25:02.623365  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623400  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:02.623407  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623486  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:02.623563  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623580  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:02.623584  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623609  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:02.623646  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623662  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:02.623666  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623694  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:02.623738  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m02 san=[127.0.0.1 192.168.49.3 ha-858120-m02 localhost minikube]
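configureAuth regenerates the machine's server certificate so every name in the SAN list above (127.0.0.1, the node IP 192.168.49.3, the hostname, localhost, minikube) is covered, signed by the profile's CA key. A compact, self-contained sketch of issuing such a SAN certificate with crypto/x509 (the throwaway in-memory CA, key size, and validity are placeholders, not minikube's values):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway CA, standing in for ca.pem/ca-key.pem from the log above.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the same kind of SAN list the provisioner uses.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-858120-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-858120-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			fmt.Println("create cert:", err)
			return
		}
		fmt.Printf("issued server cert, %d DER bytes\n", len(srvDER))
	}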
	I1017 20:25:02.747705  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:02.747782  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:02.747828  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.766757  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:02.880520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:02.880580  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:02.906371  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:02.906496  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:02.945019  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:02.945087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:25:02.987301  633180 provision.go:87] duration metric: took 393.559503ms to configureAuth
	I1017 20:25:02.987344  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:02.987585  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:02.987711  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.018499  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:03.018813  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:03.018831  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:03.435808  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:03.435834  633180 machine.go:96] duration metric: took 4.503969223s to provisionDockerMachine
	I1017 20:25:03.435844  633180 start.go:293] postStartSetup for "ha-858120-m02" (driver="docker")
	I1017 20:25:03.435855  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:03.435916  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:03.435964  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.455906  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.562871  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:03.566432  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:03.566502  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:03.566518  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:03.566584  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:03.566666  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:03.566676  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:03.566778  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:03.574445  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:03.599633  633180 start.go:296] duration metric: took 163.773711ms for postStartSetup
	I1017 20:25:03.599729  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:03.599785  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.627245  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.741852  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:03.758675  633180 fix.go:56] duration metric: took 5.354138506s for fixHost
	I1017 20:25:03.758698  633180 start.go:83] releasing machines lock for "ha-858120-m02", held for 5.354185538s
	I1017 20:25:03.758773  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:03.786714  633180 out.go:179] * Found network options:
	I1017 20:25:03.789819  633180 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 20:25:03.793065  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:03.793118  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:03.793187  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:03.793246  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.793459  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:03.793525  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.843024  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.846827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:04.116601  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:04.182522  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:04.182658  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:04.199347  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:04.199411  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:04.199459  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:04.199536  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:04.224421  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:04.246523  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:04.246695  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:04.274907  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:04.293080  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:04.507388  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:04.744373  633180 docker.go:234] disabling docker service ...
	I1017 20:25:04.744489  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:04.763912  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:04.778471  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:04.999181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:05.212501  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:05.227293  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:05.243392  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:05.243504  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.253121  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:05.253268  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.262917  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.272790  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.282153  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:05.291008  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.300670  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.310655  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.320320  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:05.328861  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:05.337217  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:05.542704  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
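The run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: it pins pause_image to registry.k8s.io/pause:3.10.1, sets cgroup_manager to "cgroupfs", forces conmon_cgroup to "pod", and injects net.ipv4.ip_unprivileged_port_start=0 into default_sysctls. A hedged Go sketch of the same style of whole-line substitution on a local copy of the file (file path and the two patterns mirror the log; this is illustrative, not minikube's crio.go):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "02-crio.conf" // local copy of /etc/crio/crio.conf.d/02-crio.conf
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		// Same idea as the sed invocations in the log: replace whole key lines.
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, data, 0o644); err != nil {
			fmt.Println("write:", err)
			return
		}
		fmt.Println("rewrote", path)
	}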
	I1017 20:25:05.766295  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:25:05.766406  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:25:05.770528  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:25:05.770594  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:25:05.774319  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:25:05.802224  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:25:05.802316  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.832543  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.868559  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:25:05.871619  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:25:05.874677  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:25:05.891324  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:25:05.895481  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:05.906398  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:25:05.906643  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:05.906915  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:25:05.924891  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:25:05.925180  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.3
	I1017 20:25:05.925188  633180 certs.go:195] generating shared ca certs ...
	I1017 20:25:05.925202  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:25:05.925333  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:25:05.925371  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:25:05.925378  633180 certs.go:257] generating profile certs ...
	I1017 20:25:05.925461  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:25:05.925516  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.75ce5734
	I1017 20:25:05.925554  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:25:05.925562  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:25:05.925574  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:25:05.925587  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:25:05.925602  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:25:05.925612  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:25:05.925624  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:25:05.925635  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:25:05.925645  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:25:05.925695  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:25:05.925722  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:25:05.925731  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:25:05.925756  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:25:05.925779  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:25:05.925801  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:25:05.925843  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:05.925869  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:05.925885  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:25:05.925895  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:25:05.925947  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:25:05.942775  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:25:06.039567  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:25:06.043552  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:25:06.051886  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:25:06.055650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:25:06.071273  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:25:06.074980  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:25:06.084033  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:25:06.087747  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:25:06.095897  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:25:06.099650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:25:06.109034  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:25:06.112875  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:25:06.121486  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:25:06.140459  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:25:06.159242  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:25:06.177880  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:25:06.196379  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:25:06.214366  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:25:06.232392  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:25:06.250082  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:25:06.268477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:25:06.287023  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:25:06.306305  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:25:06.325727  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:25:06.339132  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:25:06.351861  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:25:06.364957  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:25:06.378148  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:25:06.391750  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:25:06.405157  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:25:06.418865  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:25:06.425313  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:25:06.433695  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437626  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437740  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.479551  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:25:06.487333  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:25:06.495467  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.498961  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.499069  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.541081  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:25:06.549258  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:25:06.557861  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.561976  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.562057  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.604418  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:25:06.612470  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:25:06.616274  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:25:06.657319  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:25:06.701813  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:25:06.745127  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:25:06.787373  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:25:06.830322  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
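Each `openssl x509 -noout -in ... -checkend 86400` call above asks whether that control-plane certificate will still be valid 24 hours from now; passing certs are left untouched. The equivalent check in Go with crypto/x509 (the path shown is one of the certs from the log; a sketch, not minikube's cert validation code):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Println("read:", err)
			return
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Println("no PEM block found")
			return
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Println("parse:", err)
			return
		}
		// Equivalent of `openssl x509 -checkend 86400`: fail if the cert is no
		// longer valid 24 hours from now.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h:", cert.NotAfter)
			return
		}
		fmt.Println("certificate valid past 24h, expires", cert.NotAfter)
	}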
	I1017 20:25:06.871900  633180 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 20:25:06.872035  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:25:06.872065  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:25:06.872127  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:25:06.885270  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
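kube-vip only enables its IPVS control-plane load balancer when the ip_vs kernel modules are loaded; `lsmod | grep ip_vs` returns nothing here, so the manifest generated below sticks to the plain ARP-based VIP configuration. A small sketch of the same module probe reading /proc/modules directly (illustrative only, not kube-vip.go):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// moduleLoaded scans /proc/modules, the same data lsmod reads, for a module name.
	func moduleLoaded(name string) (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			if strings.HasPrefix(sc.Text(), name+" ") {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := moduleLoaded("ip_vs")
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		if !ok {
			fmt.Println("ip_vs not loaded: skip IPVS load balancing, use ARP VIP only")
			return
		}
		fmt.Println("ip_vs loaded: IPVS control-plane load balancing possible")
	}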
	I1017 20:25:06.885337  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 20:25:06.885400  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:25:06.893245  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:25:06.893321  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:25:06.901109  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:25:06.914333  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:25:06.927147  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:25:06.941387  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:25:06.945076  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:06.954881  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.078941  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.093624  633180 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:25:07.094028  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:07.097836  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:25:07.100837  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.224505  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.238770  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:25:07.238907  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:25:07.239230  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m02" to be "Ready" ...
	W1017 20:25:17.242440  633180 node_ready.go:55] error getting node "ha-858120-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02": net/http: TLS handshake timeout
	I1017 20:25:20.808419  633180 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02"
	I1017 20:25:27.596126  633180 node_ready.go:49] node "ha-858120-m02" is "Ready"
	I1017 20:25:27.596154  633180 node_ready.go:38] duration metric: took 20.356898962s for node "ha-858120-m02" to be "Ready" ...
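node_ready.go polls the API server (through the stale-host override noted above) until the m02 Node object reports the Ready condition as True, treating transient errors such as the earlier TLS handshake timeout as reasons to retry. A hedged client-go sketch of that poll loop (kubeconfig path, node name, and the 6-minute budget come from the log; the retry interval is illustrative):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("",
			"/home/jenkins/minikube-integration/21664-584308/kubeconfig")
		if err != nil {
			fmt.Println("config:", err)
			return
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Println("client:", err)
			return
		}
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-858120-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println(`node "ha-858120-m02" is "Ready"`)
						return
					}
				}
			}
			// Errors (e.g. TLS handshake timeout while the apiserver restarts) just mean retry.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for node to become Ready")
	}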
	I1017 20:25:27.596166  633180 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:25:27.596229  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.096580  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.597221  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.097036  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.596474  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.096742  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.596355  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.621450  633180 api_server.go:72] duration metric: took 23.527778082s to wait for apiserver process to appear ...
	I1017 20:25:30.621472  633180 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:25:30.621491  633180 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 20:25:30.643810  633180 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 20:25:30.645148  633180 api_server.go:141] control plane version: v1.34.1
	I1017 20:25:30.645172  633180 api_server.go:131] duration metric: took 23.693241ms to wait for apiserver health ...
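Once pgrep sees a kube-apiserver process, minikube probes https://192.168.49.2:8443/healthz until it answers 200 "ok", then reads the control-plane version, as the lines above show. A minimal sketch of a single healthz probe that trusts the profile's CA (paths and host taken from the log; an illustration of the probe, not minikube's api_server.go):

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
		"time"
	)

	func main() {
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt")
		if err != nil {
			fmt.Println("read CA:", err)
			return
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz probe failed (would retry):", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		// A healthy apiserver answers 200 with the literal body "ok".
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}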
	I1017 20:25:30.645181  633180 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:25:30.668363  633180 system_pods.go:59] 26 kube-system pods found
	I1017 20:25:30.668458  633180 system_pods.go:61] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668491  633180 system_pods.go:61] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668536  633180 system_pods.go:61] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.668570  633180 system_pods.go:61] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.668614  633180 system_pods.go:61] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.668638  633180 system_pods.go:61] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.668658  633180 system_pods.go:61] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.668690  633180 system_pods.go:61] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.668714  633180 system_pods.go:61] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.668741  633180 system_pods.go:61] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.668778  633180 system_pods.go:61] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.668811  633180 system_pods.go:61] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.668837  633180 system_pods.go:61] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.668879  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.668901  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.668934  633180 system_pods.go:61] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.668958  633180 system_pods.go:61] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.668978  633180 system_pods.go:61] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.669017  633180 system_pods.go:61] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.669041  633180 system_pods.go:61] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.669067  633180 system_pods.go:61] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.669101  633180 system_pods.go:61] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.669127  633180 system_pods.go:61] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.669157  633180 system_pods.go:61] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.669188  633180 system_pods.go:61] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.669214  633180 system_pods.go:61] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.669236  633180 system_pods.go:74] duration metric: took 24.048955ms to wait for pod list to return data ...
	I1017 20:25:30.669273  633180 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:25:30.687415  633180 default_sa.go:45] found service account: "default"
	I1017 20:25:30.687489  633180 default_sa.go:55] duration metric: took 18.191795ms for default service account to be created ...
	I1017 20:25:30.687514  633180 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:25:30.762042  633180 system_pods.go:86] 26 kube-system pods found
	I1017 20:25:30.762148  633180 system_pods.go:89] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762181  633180 system_pods.go:89] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762224  633180 system_pods.go:89] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.762250  633180 system_pods.go:89] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.762273  633180 system_pods.go:89] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.762307  633180 system_pods.go:89] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.762330  633180 system_pods.go:89] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.762352  633180 system_pods.go:89] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.762387  633180 system_pods.go:89] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.762413  633180 system_pods.go:89] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.762435  633180 system_pods.go:89] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.762469  633180 system_pods.go:89] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.762497  633180 system_pods.go:89] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.762517  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.762554  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.762578  633180 system_pods.go:89] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.762599  633180 system_pods.go:89] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.762635  633180 system_pods.go:89] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.762662  633180 system_pods.go:89] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.762684  633180 system_pods.go:89] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.762717  633180 system_pods.go:89] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.762741  633180 system_pods.go:89] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.762760  633180 system_pods.go:89] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.762794  633180 system_pods.go:89] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.762816  633180 system_pods.go:89] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.762834  633180 system_pods.go:89] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.762855  633180 system_pods.go:126] duration metric: took 75.322066ms to wait for k8s-apps to be running ...
	I1017 20:25:30.762895  633180 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:25:30.762983  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:25:30.798874  633180 system_svc.go:56] duration metric: took 35.957427ms WaitForService to wait for kubelet
	I1017 20:25:30.798951  633180 kubeadm.go:586] duration metric: took 23.705274367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:25:30.798985  633180 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:25:30.805472  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805553  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805580  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805600  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805635  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805661  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805684  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805722  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805746  633180 node_conditions.go:105] duration metric: took 6.741948ms to run NodePressure ...
	I1017 20:25:30.805773  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:25:30.805824  633180 start.go:255] writing updated cluster config ...
	I1017 20:25:30.809328  633180 out.go:203] 
	I1017 20:25:30.812477  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:30.812660  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.816059  633180 out.go:179] * Starting "ha-858120-m03" control-plane node in "ha-858120" cluster
	I1017 20:25:30.819758  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:25:30.822780  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:25:30.825590  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:25:30.825654  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:25:30.825902  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:25:30.826027  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:25:30.826092  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:25:30.826241  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.865897  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:25:30.865917  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:25:30.865932  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:25:30.865956  633180 start.go:360] acquireMachinesLock for ha-858120-m03: {Name:mk0745e738c38fcaad2c00b3d5938ec5b18bc19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:25:30.866008  633180 start.go:364] duration metric: took 36.481µs to acquireMachinesLock for "ha-858120-m03"
	I1017 20:25:30.866027  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:25:30.866033  633180 fix.go:54] fixHost starting: m03
	I1017 20:25:30.866284  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:30.902472  633180 fix.go:112] recreateIfNeeded on ha-858120-m03: state=Stopped err=<nil>
	W1017 20:25:30.902498  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:25:30.906012  633180 out.go:252] * Restarting existing docker container for "ha-858120-m03" ...
	I1017 20:25:30.906100  633180 cli_runner.go:164] Run: docker start ha-858120-m03
	I1017 20:25:31.385666  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:31.416798  633180 kic.go:430] container "ha-858120-m03" state is running.
	I1017 20:25:31.417186  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:31.445988  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:31.446246  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:25:31.446327  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:31.476234  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:31.476543  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:31.476558  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:25:31.477171  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:25:34.759062  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:34.759091  633180 ubuntu.go:182] provisioning hostname "ha-858120-m03"
	I1017 20:25:34.759181  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:34.785061  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:34.785366  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:34.785384  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m03 && echo "ha-858120-m03" | sudo tee /etc/hostname
	I1017 20:25:35.026879  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:35.027037  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.055472  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:35.055775  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:35.055791  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
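The shell block above is the idempotent /etc/hosts update the provisioner runs over SSH: it rewrites an existing 127.0.1.1 entry or appends one if none is present, so re-provisioning the same node is safe. A minimal Go sketch that assembles the same fragment for an arbitrary hostname (the helper name hostsSnippet is hypothetical, for illustration only; minikube builds the string internally before sending it over SSH):

	package main

	import "fmt"

	// hostsSnippet returns a shell fragment that maps 127.0.1.1 to the given
	// hostname in /etc/hosts, editing an existing entry in place or appending
	// a new one. Hypothetical helper, shown only to mirror the log above.
	func hostsSnippet(name string) string {
		return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, name)
	}

	func main() {
		fmt.Println(hostsSnippet("ha-858120-m03"))
	}

The grep -xq guards are what make the command safe to re-run on every restart of the machine.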
	I1017 20:25:35.277230  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:35.277256  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:35.277273  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:35.277283  633180 provision.go:84] configureAuth start
	I1017 20:25:35.277348  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:35.311355  633180 provision.go:143] copyHostCerts
	I1017 20:25:35.311397  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311430  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:35.311438  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311519  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:35.311605  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311621  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:35.311626  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311652  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:35.311691  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311709  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:35.311713  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311737  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:35.311782  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m03 san=[127.0.0.1 192.168.49.4 ha-858120-m03 localhost minikube]
	I1017 20:25:35.867211  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:35.867305  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:35.867370  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.885861  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:36.014744  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:36.014818  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:36.078628  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:36.078695  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:36.159581  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:36.159683  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:25:36.221533  633180 provision.go:87] duration metric: took 944.235432ms to configureAuth
	I1017 20:25:36.221570  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:36.221864  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:36.222030  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:36.252315  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:36.252618  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:36.252633  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:37.901354  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:37.901379  633180 machine.go:96] duration metric: took 6.455113021s to provisionDockerMachine
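Provisioning ends with the runtime options write shown a few lines up: a sysconfig drop-in containing an --insecure-registry flag for the service CIDR, followed by a crio restart. A small Go sketch that composes that same command string (the function name crioSysconfigCmd is illustrative, not a minikube API):

	package main

	import "fmt"

	// crioSysconfigCmd returns the shell command seen in the log: it writes
	// /etc/sysconfig/crio.minikube with extra engine options and restarts crio.
	func crioSysconfigCmd(serviceCIDR string) string {
		opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
		return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	}

	func main() {
		fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
	}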
	I1017 20:25:37.901397  633180 start.go:293] postStartSetup for "ha-858120-m03" (driver="docker")
	I1017 20:25:37.901423  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:37.901507  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:37.901580  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:37.931348  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.063033  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:38.067834  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:38.067869  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:38.067882  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:38.067943  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:38.068028  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:38.068035  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:38.068144  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:38.080413  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:38.111750  633180 start.go:296] duration metric: took 210.321276ms for postStartSetup
	I1017 20:25:38.111848  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:38.111903  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.139479  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.252206  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:38.257552  633180 fix.go:56] duration metric: took 7.391512723s for fixHost
	I1017 20:25:38.257574  633180 start.go:83] releasing machines lock for "ha-858120-m03", held for 7.39155818s
	I1017 20:25:38.257643  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:38.279335  633180 out.go:179] * Found network options:
	I1017 20:25:38.282289  633180 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 20:25:38.285193  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285225  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285250  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285261  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:38.285342  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:38.285383  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.285405  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:38.285456  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.309400  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.319419  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.495206  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:38.620333  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:38.620409  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:38.635710  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:38.635735  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:38.635766  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:38.635815  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:38.658258  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:38.677709  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:38.677780  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:38.695381  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:38.718728  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:38.983870  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:39.232982  633180 docker.go:234] disabling docker service ...
	I1017 20:25:39.233056  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:39.251900  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:39.268736  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:39.513181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:39.774360  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:39.795448  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:39.819737  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:39.819803  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.835507  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:39.835578  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.848330  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.863809  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.873655  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:39.886248  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.899031  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.910745  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.923167  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:39.945269  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:39.956015  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:40.185598  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:27:10.577185  633180 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.391479788s)
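The preceding block reconfigures cri-o by editing /etc/crio/crio.conf.d/02-crio.conf in place with sed: pause image, cgroup_manager set to cgroupfs, conmon_cgroup, and the default_sysctls list, followed by a daemon-reload and a crio restart that took about 90 seconds here. A hedged Go sketch of one of those in-place edits, done with a multiline regexp instead of sed (illustrative only; on the node minikube shells out to sed as logged):

	package main

	import (
		"fmt"
		"regexp"
	)

	// setCgroupManager mirrors the sed edit in the log: it rewrites the
	// cgroup_manager line of a crio.conf fragment to use cgroupfs.
	func setCgroupManager(conf string) string {
		re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		return re.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	}

	func main() {
		in := "[crio.runtime]\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"
		fmt.Print(setCgroupManager(in))
	}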
	I1017 20:27:10.577210  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:27:10.577270  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:27:10.581599  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:27:10.581663  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:27:10.586217  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:27:10.618110  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:27:10.618197  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.657726  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.690017  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:27:10.692996  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:27:10.695853  633180 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 20:27:10.698743  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:27:10.717568  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:27:10.721686  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:10.732598  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:27:10.732855  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:10.733110  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:27:10.755756  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:27:10.756043  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.4
	I1017 20:27:10.756057  633180 certs.go:195] generating shared ca certs ...
	I1017 20:27:10.756073  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:27:10.756206  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:27:10.756249  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:27:10.756259  633180 certs.go:257] generating profile certs ...
	I1017 20:27:10.756334  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:27:10.756400  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.daaf2b71
	I1017 20:27:10.756443  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:27:10.756456  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:27:10.756468  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:27:10.756484  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:27:10.756494  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:27:10.756505  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:27:10.756520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:27:10.756531  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:27:10.756545  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:27:10.756595  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:27:10.756627  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:27:10.756639  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:27:10.756664  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:27:10.756689  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:27:10.756714  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:27:10.756760  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:27:10.756791  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:10.756807  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:27:10.756818  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:27:10.756875  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:27:10.776286  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:27:10.875440  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:27:10.879271  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:27:10.887346  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:27:10.890991  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:27:10.899445  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:27:10.902677  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:27:10.910747  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:27:10.914609  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:27:10.923275  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:27:10.927331  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:27:10.937614  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:27:10.941051  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:27:10.949375  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:27:10.970388  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:27:10.989978  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:27:11.024313  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:27:11.045252  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:27:11.067969  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:27:11.093977  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:27:11.116400  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:27:11.143991  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:27:11.165234  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:27:11.186154  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:27:11.204999  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:27:11.217584  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:27:11.231184  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:27:11.245544  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:27:11.258825  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:27:11.273380  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:27:11.288154  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:27:11.301714  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:27:11.307871  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:27:11.316139  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320071  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320164  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.360582  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:27:11.368911  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:27:11.386044  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389821  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389916  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.431364  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:27:11.439391  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:27:11.448231  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452172  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452235  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.493408  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:27:11.501304  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:27:11.505093  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:27:11.546404  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:27:11.588587  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:27:11.629385  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:27:11.670643  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:27:11.711584  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
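Each `openssl x509 -noout -checkend 86400` run above exits non-zero when the certificate expires within the next 24 hours, which is how the restart path decides whether control-plane certs can be reused. A minimal Go equivalent of that check using crypto/x509 (the file path is just an example taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Example path; any PEM-encoded certificate works here.
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM block found")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Same semantics as `openssl x509 -checkend 86400`.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}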
	I1017 20:27:11.752896  633180 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 20:27:11.752991  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:27:11.753019  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:27:11.753080  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:27:11.765738  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:27:11.765801  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
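The manifest above is kube-vip running as a static pod; a few lines below it is scp'd to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet picks it up directly. Note that control-plane load-balancing was disabled because `lsmod | grep ip_vs` found no IPVS modules on this node. A minimal Go sketch of that same probe, reading /proc/modules (the source lsmod itself uses) rather than shelling out; the function name is illustrative:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasIPVS reports whether any ip_vs kernel module is currently loaded,
	// by scanning /proc/modules.
	func hasIPVS() (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		s := bufio.NewScanner(f)
		for s.Scan() {
			if strings.HasPrefix(s.Text(), "ip_vs") {
				return true, nil
			}
		}
		return false, s.Err()
	}

	func main() {
		ok, err := hasIPVS()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("ip_vs loaded:", ok)
	}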
	I1017 20:27:11.765864  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:27:11.773834  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:27:11.773902  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:27:11.782020  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:27:11.794989  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:27:11.809996  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:27:11.825247  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:27:11.828873  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:11.838796  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:11.986822  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.004552  633180 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:27:12.005009  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:12.009913  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:27:12.012573  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:12.166504  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.181240  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:27:12.181372  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:27:12.181669  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m03" to be "Ready" ...
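The retries that follow ("Ready":"Unknown", will retry) come from polling the node object's Ready condition through the API server until it flips to True or the 6m timeout expires. A hedged client-go sketch of a single such check, assuming a recent client-go; the kubeconfig path and node name are examples, and minikube's own poller wraps this in backoff and timeout handling:

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the named node's NodeReady condition is True.
	func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(context.Background(), cs, "ha-858120-m03")
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", ready)
	}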
	W1017 20:27:14.185938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:16.186949  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:18.685673  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:20.686393  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:23.185742  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:25.186041  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:27.686171  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:30.186140  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:32.685938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:34.686362  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:37.189099  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:39.685178  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:41.685898  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:43.686246  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:46.185981  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:48.186022  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:50.685565  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:53.185024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:55.185063  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:57.186756  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:59.685967  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:02.185450  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:04.685930  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:07.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:09.185945  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:11.685298  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:13.685825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:16.186173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:18.685675  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:21.185822  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:23.686024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:25.686653  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:27.688976  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:30.185995  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:32.685998  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:34.686062  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:37.185512  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:39.684946  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:41.685173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:43.685392  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:45.686411  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:48.185559  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:50.685010  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:52.685699  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:54.685799  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:57.185287  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:59.185541  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:01.186445  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:03.685663  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:05.686118  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:08.185421  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:10.185464  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:12.685166  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:14.685776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:16.686147  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:18.686284  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:21.185551  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:23.685297  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:26.185709  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:28.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:30.186229  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:32.685640  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:34.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:36.685906  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:39.185156  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:41.185196  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:43.185432  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:45.189065  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:47.685980  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:50.185249  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:52.186422  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:54.685912  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:57.185530  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:59.185859  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:01.187381  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:03.685399  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:05.685481  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:08.187943  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:10.689106  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:13.185786  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:15.685607  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:17.686048  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:19.686753  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:22.185049  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:24.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:26.685608  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:28.686143  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:31.185273  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:33.186568  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:35.685304  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:37.685459  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:39.685964  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:42.186035  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:44.186982  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:46.685781  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:49.185082  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:51.185419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:53.686212  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:56.185582  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:58.185659  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:00.222492  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:02.685725  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:04.686504  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:07.186161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:09.685238  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:11.685865  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:14.185500  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:16.185620  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:18.192262  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:20.686051  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:23.185373  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:25.686121  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:28.187578  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:30.689269  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:33.185825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:35.686100  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:38.186012  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:40.685515  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:42.685703  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:44.685871  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:47.185764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:49.685433  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:51.685733  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:54.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:56.685619  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:59.185113  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:01.185211  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:03.185561  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:05.186288  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:07.685440  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:09.685758  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:12.185776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:14.185887  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:16.685436  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:19.185337  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:21.686419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:24.186002  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:26.686017  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:29.185789  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:31.686359  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:34.185117  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:36.185746  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:38.185848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:40.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:43.185924  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:45.186873  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:47.685424  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:49.685760  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:52.185842  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:54.685648  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:57.185264  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:59.185532  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:01.186342  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:03.685323  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:05.685848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:07.686600  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:10.185305  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	I1017 20:33:12.181809  633180 node_ready.go:38] duration metric: took 6m0.000088857s for node "ha-858120-m03" to be "Ready" ...
	I1017 20:33:12.184950  633180 out.go:203] 
	W1017 20:33:12.187811  633180 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1017 20:33:12.187835  633180 out.go:285] * 
	* 
	W1017 20:33:12.189989  633180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:33:12.193072  633180 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-858120 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node list --alsologtostderr -v 5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-858120
helpers_test.go:243: (dbg) docker inspect ha-858120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	        "Created": "2025-10-17T20:18:20.77215583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:24:49.89013736Z",
	            "FinishedAt": "2025-10-17T20:24:49.310249081Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hostname",
	        "HostsPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hosts",
	        "LogPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196-json.log",
	        "Name": "/ha-858120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-858120:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-858120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	                "LowerDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-858120",
	                "Source": "/var/lib/docker/volumes/ha-858120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-858120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-858120",
	                "name.minikube.sigs.k8s.io": "ha-858120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30368a165299690b2c1e64ba7fbd000063595e2b8330a6a0386fe8ae84472e14",
	            "SandboxKey": "/var/run/docker/netns/30368a165299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-858120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:7a:f4:71:ea:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a78c784685bd8e7296863536d4a6677a78ffb6c83e55d8ef3ae48685090ce7d1",
	                    "EndpointID": "2f35a70319780f77f7bb419c5c8b2a8ea449f45b75f1d2c0d0564b394c3bec61",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-858120",
	                        "0886947eb334"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-858120 -n ha-858120
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 logs -n 25: (1.51518828s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp testdata/cp-test.txt ha-858120-m04:/home/docker/cp-test.txt                                                             │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m04.txt │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m04_ha-858120.txt                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120.txt                                                 │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node start m02 --alsologtostderr -v 5                                                                                      │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:24 UTC │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ stop    │ ha-858120 stop --alsologtostderr -v 5                                                                                                │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │ 17 Oct 25 20:24 UTC │
	│ start   │ ha-858120 start --wait true --alsologtostderr -v 5                                                                                   │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:33 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:24:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:24:49.626381  633180 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:24:49.626517  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626528  633180 out.go:374] Setting ErrFile to fd 2...
	I1017 20:24:49.626533  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626788  633180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:24:49.627220  633180 out.go:368] Setting JSON to false
	I1017 20:24:49.628041  633180 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11236,"bootTime":1760721454,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:24:49.628110  633180 start.go:141] virtualization:  
	I1017 20:24:49.633530  633180 out.go:179] * [ha-858120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:24:49.636591  633180 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:24:49.636670  633180 notify.go:220] Checking for updates...
	I1017 20:24:49.642574  633180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:24:49.645486  633180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:49.648436  633180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:24:49.651294  633180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:24:49.654188  633180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:24:49.657632  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:49.657777  633180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:24:49.688170  633180 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:24:49.688301  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.745303  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.735869738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.745414  633180 docker.go:318] overlay module found
	I1017 20:24:49.748552  633180 out.go:179] * Using the docker driver based on existing profile
	I1017 20:24:49.751497  633180 start.go:305] selected driver: docker
	I1017 20:24:49.751513  633180 start.go:925] validating driver "docker" against &{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.751702  633180 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:24:49.751804  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.806673  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.798122578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.807082  633180 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:24:49.807153  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:49.807223  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:49.807278  633180 start.go:349] cluster config:
	{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.810596  633180 out.go:179] * Starting "ha-858120" primary control-plane node in "ha-858120" cluster
	I1017 20:24:49.813288  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:49.816087  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:49.818802  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:49.818879  633180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:24:49.818889  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:49.818892  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:49.819084  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:49.819096  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:49.819258  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:49.838368  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:49.838387  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:49.838401  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:49.838423  633180 start.go:360] acquireMachinesLock for ha-858120: {Name:mk62278368bd1da921b0ccf6844a662f4fa595df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:49.838475  633180 start.go:364] duration metric: took 34.511µs to acquireMachinesLock for "ha-858120"
	I1017 20:24:49.838494  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:49.838499  633180 fix.go:54] fixHost starting: 
	I1017 20:24:49.838762  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:49.855336  633180 fix.go:112] recreateIfNeeded on ha-858120: state=Stopped err=<nil>
	W1017 20:24:49.855369  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:49.858630  633180 out.go:252] * Restarting existing docker container for "ha-858120" ...
	I1017 20:24:49.858710  633180 cli_runner.go:164] Run: docker start ha-858120
	I1017 20:24:50.114094  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:50.136057  633180 kic.go:430] container "ha-858120" state is running.
	I1017 20:24:50.136454  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:50.160255  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:50.160500  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:50.160583  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:50.184023  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:50.184342  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:50.184352  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:50.185019  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40052->127.0.0.1:33552: read: connection reset by peer
	I1017 20:24:53.330671  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.330705  633180 ubuntu.go:182] provisioning hostname "ha-858120"
	I1017 20:24:53.330778  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.348402  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.348733  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.348751  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120 && echo "ha-858120" | sudo tee /etc/hostname
	I1017 20:24:53.508835  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.508970  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.526510  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.526830  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.526846  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:24:53.671383  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:24:53.671409  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:24:53.671452  633180 ubuntu.go:190] setting up certificates
	I1017 20:24:53.671461  633180 provision.go:84] configureAuth start
	I1017 20:24:53.671530  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:53.689159  633180 provision.go:143] copyHostCerts
	I1017 20:24:53.689210  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689244  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:24:53.689256  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689334  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:24:53.689461  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689496  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:24:53.689506  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689536  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:24:53.689582  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689603  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:24:53.689611  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689635  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:24:53.689684  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120 san=[127.0.0.1 192.168.49.2 ha-858120 localhost minikube]
	I1017 20:24:54.151535  633180 provision.go:177] copyRemoteCerts
	I1017 20:24:54.151620  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:24:54.151667  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.170207  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.274864  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:24:54.274925  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:24:54.292724  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:24:54.292785  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 20:24:54.311391  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:24:54.311452  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:24:54.329407  633180 provision.go:87] duration metric: took 657.913595ms to configureAuth
	I1017 20:24:54.329435  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:24:54.329671  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:54.329775  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.347176  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:54.347484  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:54.347504  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:24:54.678767  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:24:54.678791  633180 machine.go:96] duration metric: took 4.518274151s to provisionDockerMachine
	I1017 20:24:54.678802  633180 start.go:293] postStartSetup for "ha-858120" (driver="docker")
	I1017 20:24:54.678813  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:24:54.678876  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:24:54.678922  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.699409  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.802879  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:24:54.806060  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:24:54.806088  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:24:54.806100  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:24:54.806152  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:24:54.806232  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:24:54.806239  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:24:54.806342  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:24:54.813547  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:54.830587  633180 start.go:296] duration metric: took 151.77042ms for postStartSetup
	I1017 20:24:54.830688  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:24:54.830734  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.847827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.948374  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:24:54.953275  633180 fix.go:56] duration metric: took 5.114768478s for fixHost
	I1017 20:24:54.953301  633180 start.go:83] releasing machines lock for "ha-858120", held for 5.114818193s
	I1017 20:24:54.953368  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:54.969761  633180 ssh_runner.go:195] Run: cat /version.json
	I1017 20:24:54.969816  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.970081  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:24:54.970130  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.994236  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.003341  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.198024  633180 ssh_runner.go:195] Run: systemctl --version
	I1017 20:24:55.204628  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:24:55.242919  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:24:55.247648  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:24:55.247728  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:24:55.255380  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:24:55.255403  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:24:55.255433  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:24:55.255479  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:24:55.270476  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:24:55.283296  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:24:55.283382  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:24:55.298839  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:24:55.311724  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:24:55.424434  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:24:55.537289  633180 docker.go:234] disabling docker service ...
	I1017 20:24:55.537361  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:24:55.553026  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:24:55.566351  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:24:55.681250  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:24:55.798405  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:24:55.811378  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:24:55.825585  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:24:55.825661  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.834063  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:24:55.834172  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.843151  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.851611  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.860130  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:24:55.867797  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.876324  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.884581  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.892952  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:24:55.900323  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:24:55.907965  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.021101  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
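Note: taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) aim to leave /etc/crio/crio.conf.d/02-crio.conf with roughly the fragment below before cri-o is restarted; the surrounding file contents vary by base image, so this is only the intended end state, shown here as a Go constant for reference:

package main

import "fmt"

// Approximate end state of /etc/crio/crio.conf.d/02-crio.conf after the sed
// edits above (the rest of the drop-in is left untouched by those commands).
const crioDropIn = `
pause_image = "registry.k8s.io/pause:3.10.1"   # set by the pause_image sed
cgroup_manager = "cgroupfs"                    # set by the cgroup_manager sed
conmon_cgroup = "pod"                          # re-added after cgroup_manager
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(crioDropIn) }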
	I1017 20:24:56.158831  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:24:56.158928  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:24:56.162776  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:24:56.162859  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:24:56.166390  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:24:56.192830  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:24:56.192972  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.221409  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.254422  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:24:56.257178  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:24:56.271792  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:24:56.275653  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
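Note: the bash one-liner above rewrites /etc/hosts idempotently: it drops any existing host.minikube.internal line and appends the docker network gateway mapping (192.168.49.1). A minimal Go sketch of the same filter-and-append, illustrative only:

package main

import (
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that exactly one line maps ip to name,
// mirroring the grep -v / echo / cp pipeline in the log above.
func pinHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale mapping
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log: gateway IP of the "ha-858120" docker network.
	_ = pinHost("/etc/hosts", "192.168.49.1", "host.minikube.internal")
}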
	I1017 20:24:56.285727  633180 kubeadm.go:883] updating cluster {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:24:56.285880  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:56.285942  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.320941  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.320965  633180 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:24:56.321020  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.345716  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.345741  633180 cache_images.go:85] Images are preloaded, skipping loading
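Note: the preload check runs "sudo crictl images --output json" and compares the returned tags against the expected image set for v1.34.1 before deciding to skip extraction. A hedged sketch of that comparison, assuming the usual {"images":[{"repoTags":[...]}]} shape of crictl's JSON output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictl images --output json is assumed to return {"images":[{"repoTags":[...]}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	loaded := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			loaded[tag] = true
		}
	}
	// One of the images the preload is expected to contain (from this run).
	fmt.Println("pause preloaded:", loaded["registry.k8s.io/pause:3.10.1"])
}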
	I1017 20:24:56.345750  633180 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 20:24:56.345858  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:24:56.345940  633180 ssh_runner.go:195] Run: crio config
	I1017 20:24:56.409511  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:56.409542  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:56.409567  633180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:24:56.409589  633180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-858120 NodeName:ha-858120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:24:56.410072  633180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-858120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
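Note: the generated kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is later copied to /var/tmp/minikube/kubeadm.yaml.new. A small sketch that walks such a stream and lists each document's apiVersion/kind; gopkg.in/yaml.v3 is an assumed dependency for the example:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

type header struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new") // path from the log
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var h header
		if err := dec.Decode(&h); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("decode error:", err)
			return
		}
		// Expected here: kubeadm InitConfiguration and ClusterConfiguration,
		// then KubeletConfiguration and KubeProxyConfiguration.
		fmt.Printf("%s %s\n", h.APIVersion, h.Kind)
	}
}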
	
	I1017 20:24:56.410096  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:24:56.410163  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:24:56.425787  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:56.425947  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
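Note: because "lsmod | grep ip_vs" returned nothing, the kube-vip manifest above only advertises the VIP 192.168.49.254 via ARP (vip_arp=true) and skips ipvs-based control-plane load-balancing. lsmod is just a view over /proc/modules, so an equivalent module check in Go looks like this (sketch only):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasModule reports whether a kernel module (e.g. "ip_vs") is loaded,
// by scanning /proc/modules the same way lsmod does.
func hasModule(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()

	s := bufio.NewScanner(f)
	for s.Scan() {
		fields := strings.Fields(s.Text())
		if len(fields) > 0 && fields[0] == name {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasModule("ip_vs")
	fmt.Println("ip_vs loaded:", ok, "err:", err)
}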
	I1017 20:24:56.426028  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:24:56.433575  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:24:56.433642  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 20:24:56.441456  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 20:24:56.453796  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:24:56.466376  633180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 20:24:56.480780  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:24:56.493351  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:24:56.497083  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.507006  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.614355  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:24:56.631138  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.2
	I1017 20:24:56.631170  633180 certs.go:195] generating shared ca certs ...
	I1017 20:24:56.631205  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:56.631352  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:24:56.631435  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:24:56.631448  633180 certs.go:257] generating profile certs ...
	I1017 20:24:56.631532  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:24:56.631567  633180 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f
	I1017 20:24:56.631581  633180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 20:24:57.260314  633180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f ...
	I1017 20:24:57.260390  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f: {Name:mk0eeb82ef1c3e333bd14f384361a665d81ea399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260624  633180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f ...
	I1017 20:24:57.260661  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f: {Name:mkd9170cb1ed384cce4c4204f35083d5972d0281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260803  633180 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt
	I1017 20:24:57.260987  633180 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key
	I1017 20:24:57.261179  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:24:57.261215  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:24:57.261249  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:24:57.261296  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:24:57.261335  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:24:57.261369  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:24:57.261415  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:24:57.261450  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:24:57.261591  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:24:57.261674  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:24:57.261740  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:24:57.261777  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:24:57.261824  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:24:57.261878  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:24:57.261950  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:24:57.262030  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:57.262099  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.262148  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.262186  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.262769  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:24:57.292641  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:24:57.324994  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:24:57.350011  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:24:57.393934  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:24:57.425087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:24:57.476207  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:24:57.521477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:24:57.553659  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:24:57.581891  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:24:57.616931  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:24:57.653395  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:24:57.676685  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:24:57.687849  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:24:57.697063  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701415  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701527  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.748713  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:24:57.761692  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:24:57.778101  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782605  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782719  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.851750  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:24:57.860250  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:24:57.872947  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877259  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877426  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.935424  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
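Note: each ca-certificates step above follows the OpenSSL convention: "openssl x509 -hash -noout -in <cert>" prints the subject hash (b5213941, 51391683, 3ec20f2e in this run), and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so TLS clients can locate it. A sketch of the same hash-and-link step via os/exec; the example path is just one of the certs copied above:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink, as the ssh_runner commands above do.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}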
	I1017 20:24:57.948490  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:24:57.952867  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:24:58.010016  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:24:58.063976  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:24:58.108039  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:24:58.150227  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:24:58.194750  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
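Note: "openssl x509 -noout -checkend 86400" exits non-zero if the certificate expires within the next 24 hours; minikube runs it against each control-plane cert above before deciding to reuse them. The same check in pure Go with crypto/x509, assuming the file holds a single PEM certificate:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// matching openssl's "-checkend <seconds>" behaviour.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, "err:", err)
}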
	I1017 20:24:58.245919  633180 kubeadm.go:400] StartCluster: {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:58.246100  633180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:24:58.246199  633180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:24:58.291268  633180 cri.go:89] found id: "ee8a159707f901bec7d65f64a977c75fa75282a553082688f13964bab6bed5f2"
	I1017 20:24:58.291334  633180 cri.go:89] found id: "62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1"
	I1017 20:24:58.291353  633180 cri.go:89] found id: "09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b"
	I1017 20:24:58.291371  633180 cri.go:89] found id: "56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	I1017 20:24:58.291391  633180 cri.go:89] found id: "7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6"
	I1017 20:24:58.291421  633180 cri.go:89] found id: ""
	I1017 20:24:58.291493  633180 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:24:58.311475  633180 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:24:58Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:24:58.311623  633180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:24:58.320631  633180 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:24:58.320702  633180 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:24:58.320786  633180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:24:58.333311  633180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:58.333829  633180 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-858120" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.333984  633180 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "ha-858120" cluster setting kubeconfig missing "ha-858120" context setting]
	I1017 20:24:58.334333  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.334925  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:24:58.335797  633180 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:24:58.335856  633180 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 20:24:58.335916  633180 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:24:58.335942  633180 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:24:58.335963  633180 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:24:58.335987  633180 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:24:58.336351  633180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:24:58.349523  633180 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 20:24:58.349592  633180 kubeadm.go:601] duration metric: took 28.869563ms to restartPrimaryControlPlane
	I1017 20:24:58.349615  633180 kubeadm.go:402] duration metric: took 103.705091ms to StartCluster
	I1017 20:24:58.349647  633180 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.349744  633180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.350418  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.350679  633180 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:24:58.350724  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:24:58.350749  633180 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:24:58.351348  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.356477  633180 out.go:179] * Enabled addons: 
	I1017 20:24:58.359610  633180 addons.go:514] duration metric: took 8.847324ms for enable addons: enabled=[]
	I1017 20:24:58.359682  633180 start.go:246] waiting for cluster config update ...
	I1017 20:24:58.359707  633180 start.go:255] writing updated cluster config ...
	I1017 20:24:58.363052  633180 out.go:203] 
	I1017 20:24:58.366186  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.366342  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.369685  633180 out.go:179] * Starting "ha-858120-m02" control-plane node in "ha-858120" cluster
	I1017 20:24:58.372589  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:58.375487  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:58.378319  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:58.378348  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:58.378444  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:58.378455  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:58.378576  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.378776  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:58.404390  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:58.404414  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:58.404426  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:58.404451  633180 start.go:360] acquireMachinesLock for ha-858120-m02: {Name:mk29f876727465da439698dbf4948f688d19b698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:58.404504  633180 start.go:364] duration metric: took 36.981µs to acquireMachinesLock for "ha-858120-m02"
	I1017 20:24:58.404523  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:58.404529  633180 fix.go:54] fixHost starting: m02
	I1017 20:24:58.404783  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.432805  633180 fix.go:112] recreateIfNeeded on ha-858120-m02: state=Stopped err=<nil>
	W1017 20:24:58.432831  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:58.436247  633180 out.go:252] * Restarting existing docker container for "ha-858120-m02" ...
	I1017 20:24:58.436330  633180 cli_runner.go:164] Run: docker start ha-858120-m02
	I1017 20:24:58.871041  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.895697  633180 kic.go:430] container "ha-858120-m02" state is running.
	I1017 20:24:58.896208  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:24:58.931596  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.931856  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:58.931915  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:24:58.966121  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:58.966428  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:24:58.966438  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:58.967202  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57724->127.0.0.1:33557: read: connection reset by peer
	I1017 20:25:02.146984  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.147066  633180 ubuntu.go:182] provisioning hostname "ha-858120-m02"
	I1017 20:25:02.147179  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.180883  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.181193  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.181204  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m02 && echo "ha-858120-m02" | sudo tee /etc/hostname
	I1017 20:25:02.371014  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.371118  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.406904  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.407240  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.407264  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:02.593559  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:02.593637  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:02.593669  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:02.593708  633180 provision.go:84] configureAuth start
	I1017 20:25:02.593805  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:02.623320  633180 provision.go:143] copyHostCerts
	I1017 20:25:02.623365  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623400  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:02.623407  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623486  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:02.623563  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623580  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:02.623584  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623609  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:02.623646  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623662  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:02.623666  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623694  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:02.623738  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m02 san=[127.0.0.1 192.168.49.3 ha-858120-m02 localhost minikube]
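Note: configureAuth above issues a server certificate signed by the local CA, with SANs covering the node's addresses and host names (127.0.0.1, 192.168.49.3, ha-858120-m02, localhost, minikube). A condensed sketch of such a CA-signed issuance with crypto/x509; the file names and the PKCS#1 key encoding are assumptions for the example, not minikube's actual paths or code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// mustPEM reads path and returns its first PEM block, aborting on error.
func mustPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatalf("no PEM in %s", path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEM("ca.pem").Bytes) // assumed CA cert file
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEM("ca-key.pem").Bytes) // assumes a PKCS#1 key
	if err != nil {
		log.Fatal(err)
	}
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-858120-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs listed in the provision log for ha-858120-m02.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.3")},
		DNSNames:    []string{"ha-858120-m02", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // server.pem contents
}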
	I1017 20:25:02.747705  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:02.747782  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:02.747828  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.766757  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:02.880520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:02.880580  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:02.906371  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:02.906496  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:02.945019  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:02.945087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:25:02.987301  633180 provision.go:87] duration metric: took 393.559503ms to configureAuth
	I1017 20:25:02.987344  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:02.987585  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:02.987711  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.018499  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:03.018813  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:03.018831  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:03.435808  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:03.435834  633180 machine.go:96] duration metric: took 4.503969223s to provisionDockerMachine
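Note: the CRIO_MINIKUBE_OPTIONS drop-in written a moment earlier passes --insecure-registry 10.96.0.0/12, which is the cluster's service CIDR (ServiceCIDR:10.96.0.0/12 in the config above), so pulls from in-cluster registry services are allowed without TLS. A quick sketch of the CIDR membership test that range implies:

package main

import (
	"fmt"
	"net"
)

func main() {
	// Service CIDR from the cluster config; registries with ClusterIPs in this
	// range are treated as insecure (plain HTTP) by the crio flag above.
	_, serviceNet, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	for _, ip := range []string{"10.96.0.10", "10.109.12.34", "192.168.49.2"} {
		fmt.Printf("%-14s in %s: %v\n", ip, serviceNet, serviceNet.Contains(net.ParseIP(ip)))
	}
}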
	I1017 20:25:03.435844  633180 start.go:293] postStartSetup for "ha-858120-m02" (driver="docker")
	I1017 20:25:03.435855  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:03.435916  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:03.435964  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.455906  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.562871  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:03.566432  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:03.566502  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:03.566518  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:03.566584  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:03.566666  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:03.566676  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:03.566778  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:03.574445  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:03.599633  633180 start.go:296] duration metric: took 163.773711ms for postStartSetup
	I1017 20:25:03.599729  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:03.599785  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.627245  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.741852  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:03.758675  633180 fix.go:56] duration metric: took 5.354138506s for fixHost
	I1017 20:25:03.758698  633180 start.go:83] releasing machines lock for "ha-858120-m02", held for 5.354185538s
	I1017 20:25:03.758773  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:03.786714  633180 out.go:179] * Found network options:
	I1017 20:25:03.789819  633180 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 20:25:03.793065  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:03.793118  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:03.793187  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:03.793246  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.793459  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:03.793525  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.843024  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.846827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:04.116601  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:04.182522  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:04.182658  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:04.199347  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:04.199411  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:04.199459  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:04.199536  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:04.224421  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:04.246523  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:04.246695  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:04.274907  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:04.293080  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:04.507388  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:04.744373  633180 docker.go:234] disabling docker service ...
	I1017 20:25:04.744489  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:04.763912  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:04.778471  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:04.999181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:05.212501  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:05.227293  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:05.243392  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:05.243504  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.253121  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:05.253268  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.262917  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.272790  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.282153  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:05.291008  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.300670  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.310655  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.320320  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:05.328861  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:05.337217  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:05.542704  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
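The sed and sysctl commands above rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager forced to cgroupfs, conmon_cgroup and net.ipv4.ip_unprivileged_port_start=0 re-added) and enable IPv4 forwarding before CRI-O is restarted. A minimal Go sketch of the first two rewrites, standard library only; this illustrates the edit, it is not minikube's own code:

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

	if err := os.WriteFile(conf, data, 0o644); err != nil {
		panic(err)
	}
}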
	I1017 20:25:05.766295  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:25:05.766406  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:25:05.770528  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:25:05.770594  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:25:05.774319  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:25:05.802224  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:25:05.802316  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.832543  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.868559  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:25:05.871619  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:25:05.874677  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:25:05.891324  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:25:05.895481  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:05.906398  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:25:05.906643  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:05.906915  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:25:05.924891  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:25:05.925180  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.3
	I1017 20:25:05.925188  633180 certs.go:195] generating shared ca certs ...
	I1017 20:25:05.925202  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:25:05.925333  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:25:05.925371  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:25:05.925378  633180 certs.go:257] generating profile certs ...
	I1017 20:25:05.925461  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:25:05.925516  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.75ce5734
	I1017 20:25:05.925554  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:25:05.925562  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:25:05.925574  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:25:05.925587  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:25:05.925602  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:25:05.925612  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:25:05.925624  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:25:05.925635  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:25:05.925645  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:25:05.925695  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:25:05.925722  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:25:05.925731  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:25:05.925756  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:25:05.925779  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:25:05.925801  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:25:05.925843  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:05.925869  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:05.925885  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:25:05.925895  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:25:05.925947  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:25:05.942775  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:25:06.039567  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:25:06.043552  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:25:06.051886  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:25:06.055650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:25:06.071273  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:25:06.074980  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:25:06.084033  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:25:06.087747  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:25:06.095897  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:25:06.099650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:25:06.109034  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:25:06.112875  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:25:06.121486  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:25:06.140459  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:25:06.159242  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:25:06.177880  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:25:06.196379  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:25:06.214366  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:25:06.232392  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:25:06.250082  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:25:06.268477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:25:06.287023  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:25:06.306305  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:25:06.325727  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:25:06.339132  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:25:06.351861  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:25:06.364957  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:25:06.378148  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:25:06.391750  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:25:06.405157  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:25:06.418865  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:25:06.425313  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:25:06.433695  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437626  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437740  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.479551  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:25:06.487333  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:25:06.495467  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.498961  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.499069  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.541081  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:25:06.549258  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:25:06.557861  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.561976  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.562057  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.604418  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:25:06.612470  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:25:06.616274  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:25:06.657319  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:25:06.701813  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:25:06.745127  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:25:06.787373  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:25:06.830322  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
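Each openssl x509 -noout -in <cert> -checkend 86400 run above exits non-zero if the certificate would expire within the next 24 hours; that is how the restart path decides the existing control-plane certificates can be reused. A rough Go equivalent for a single certificate, standard library only (the path is one of the certs named in the log, used here purely as an example):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkend reports whether the certificate at path remains valid for at
// least d from now, mirroring openssl's -checkend behaviour.
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(ok, err)
}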
	I1017 20:25:06.871900  633180 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 20:25:06.872035  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
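The kubelet unit fragment above is rendered per node (note --hostname-override=ha-858120-m02 and --node-ip=192.168.49.3) and is later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A small Go sketch of rendering such a drop-in with text/template; the template text below is a simplified stand-in, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Simplified drop-in template; only the fields that vary per node are templated.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(dropIn))
	err := tmpl.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.1",
		"NodeName":          "ha-858120-m02",
		"NodeIP":            "192.168.49.3",
	})
	if err != nil {
		panic(err)
	}
}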
	I1017 20:25:06.872065  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:25:06.872127  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:25:06.885270  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:25:06.885337  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
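Since the ip_vs modules are unavailable, the kube-vip static pod generated above (written below to /etc/kubernetes/manifests/kube-vip.yaml) skips control-plane load-balancing and simply runs leader election over the plndr-cp-lock Lease in kube-system, advertising the 192.168.49.254 VIP on eth0 from whichever control-plane node holds the lock. The current holder can be read from that Lease with client-go; a sketch assuming client-go is available and using a hypothetical admin kubeconfig path:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any admin kubeconfig for the cluster works; this path is an assumption.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.TODO(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("kube-vip leader:", *lease.Spec.HolderIdentity)
	}
}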
	I1017 20:25:06.885400  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:25:06.893245  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:25:06.893321  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:25:06.901109  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:25:06.914333  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:25:06.927147  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:25:06.941387  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:25:06.945076  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:06.954881  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.078941  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.093624  633180 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:25:07.094028  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:07.097836  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:25:07.100837  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.224505  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.238770  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:25:07.238907  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:25:07.239230  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m02" to be "Ready" ...
	W1017 20:25:17.242440  633180 node_ready.go:55] error getting node "ha-858120-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02": net/http: TLS handshake timeout
	I1017 20:25:20.808419  633180 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02"
	I1017 20:25:27.596126  633180 node_ready.go:49] node "ha-858120-m02" is "Ready"
	I1017 20:25:27.596154  633180 node_ready.go:38] duration metric: took 20.356898962s for node "ha-858120-m02" to be "Ready" ...
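The node_ready wait above polls the API server until ha-858120-m02 reports the Ready condition; the first attempt fails with a TLS handshake timeout while the apiserver is still coming back, and the wait completes about 20 seconds in. The same check expressed with client-go, as a sketch (the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has the Ready condition set to True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(nodeReady(cs, "ha-858120-m02"))
}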
	I1017 20:25:27.596166  633180 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:25:27.596229  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.096580  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.597221  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.097036  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.596474  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.096742  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.596355  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.621450  633180 api_server.go:72] duration metric: took 23.527778082s to wait for apiserver process to appear ...
	I1017 20:25:30.621472  633180 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:25:30.621491  633180 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 20:25:30.643810  633180 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 20:25:30.645148  633180 api_server.go:141] control plane version: v1.34.1
	I1017 20:25:30.645172  633180 api_server.go:131] duration metric: took 23.693241ms to wait for apiserver health ...
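api_server.go above waits for /healthz on https://192.168.49.2:8443 to return 200 ("ok") before reading the control-plane version. A stand-alone probe of the same endpoint in Go, trusting the cluster CA and presenting the profile's client certificate from the paths that appear in this log (a sketch, not minikube code):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	base := "/home/jenkins/minikube-integration/21664-584308/.minikube"

	caPEM, err := os.ReadFile(base + "/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	cert, err := tls.LoadX509KeyPair(
		base+"/profiles/ha-858120/client.crt",
		base+"/profiles/ha-858120/client.key",
	)
	if err != nil {
		panic(err)
	}

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{cert}},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
}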
	I1017 20:25:30.645181  633180 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:25:30.668363  633180 system_pods.go:59] 26 kube-system pods found
	I1017 20:25:30.668458  633180 system_pods.go:61] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668491  633180 system_pods.go:61] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668536  633180 system_pods.go:61] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.668570  633180 system_pods.go:61] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.668614  633180 system_pods.go:61] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.668638  633180 system_pods.go:61] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.668658  633180 system_pods.go:61] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.668690  633180 system_pods.go:61] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.668714  633180 system_pods.go:61] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.668741  633180 system_pods.go:61] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.668778  633180 system_pods.go:61] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.668811  633180 system_pods.go:61] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.668837  633180 system_pods.go:61] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.668879  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.668901  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.668934  633180 system_pods.go:61] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.668958  633180 system_pods.go:61] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.668978  633180 system_pods.go:61] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.669017  633180 system_pods.go:61] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.669041  633180 system_pods.go:61] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.669067  633180 system_pods.go:61] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.669101  633180 system_pods.go:61] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.669127  633180 system_pods.go:61] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.669157  633180 system_pods.go:61] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.669188  633180 system_pods.go:61] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.669214  633180 system_pods.go:61] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.669236  633180 system_pods.go:74] duration metric: took 24.048955ms to wait for pod list to return data ...
	I1017 20:25:30.669273  633180 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:25:30.687415  633180 default_sa.go:45] found service account: "default"
	I1017 20:25:30.687489  633180 default_sa.go:55] duration metric: took 18.191795ms for default service account to be created ...
	I1017 20:25:30.687514  633180 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:25:30.762042  633180 system_pods.go:86] 26 kube-system pods found
	I1017 20:25:30.762148  633180 system_pods.go:89] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762181  633180 system_pods.go:89] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762224  633180 system_pods.go:89] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.762250  633180 system_pods.go:89] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.762273  633180 system_pods.go:89] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.762307  633180 system_pods.go:89] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.762330  633180 system_pods.go:89] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.762352  633180 system_pods.go:89] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.762387  633180 system_pods.go:89] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.762413  633180 system_pods.go:89] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.762435  633180 system_pods.go:89] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.762469  633180 system_pods.go:89] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.762497  633180 system_pods.go:89] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.762517  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.762554  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.762578  633180 system_pods.go:89] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.762599  633180 system_pods.go:89] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.762635  633180 system_pods.go:89] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.762662  633180 system_pods.go:89] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.762684  633180 system_pods.go:89] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.762717  633180 system_pods.go:89] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.762741  633180 system_pods.go:89] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.762760  633180 system_pods.go:89] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.762794  633180 system_pods.go:89] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.762816  633180 system_pods.go:89] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.762834  633180 system_pods.go:89] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.762855  633180 system_pods.go:126] duration metric: took 75.322066ms to wait for k8s-apps to be running ...
	I1017 20:25:30.762895  633180 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:25:30.762983  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:25:30.798874  633180 system_svc.go:56] duration metric: took 35.957427ms WaitForService to wait for kubelet
	I1017 20:25:30.798951  633180 kubeadm.go:586] duration metric: took 23.705274367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:25:30.798985  633180 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:25:30.805472  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805553  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805580  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805600  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805635  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805661  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805684  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805722  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805746  633180 node_conditions.go:105] duration metric: took 6.741948ms to run NodePressure ...
	I1017 20:25:30.805773  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:25:30.805824  633180 start.go:255] writing updated cluster config ...
	I1017 20:25:30.809328  633180 out.go:203] 
	I1017 20:25:30.812477  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:30.812660  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.816059  633180 out.go:179] * Starting "ha-858120-m03" control-plane node in "ha-858120" cluster
	I1017 20:25:30.819758  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:25:30.822780  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:25:30.825590  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:25:30.825654  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:25:30.825902  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:25:30.826027  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:25:30.826092  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:25:30.826241  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.865897  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:25:30.865917  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:25:30.865932  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:25:30.865956  633180 start.go:360] acquireMachinesLock for ha-858120-m03: {Name:mk0745e738c38fcaad2c00b3d5938ec5b18bc19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:25:30.866008  633180 start.go:364] duration metric: took 36.481µs to acquireMachinesLock for "ha-858120-m03"
	I1017 20:25:30.866027  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:25:30.866033  633180 fix.go:54] fixHost starting: m03
	I1017 20:25:30.866284  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:30.902472  633180 fix.go:112] recreateIfNeeded on ha-858120-m03: state=Stopped err=<nil>
	W1017 20:25:30.902498  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:25:30.906012  633180 out.go:252] * Restarting existing docker container for "ha-858120-m03" ...
	I1017 20:25:30.906100  633180 cli_runner.go:164] Run: docker start ha-858120-m03
	I1017 20:25:31.385666  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:31.416798  633180 kic.go:430] container "ha-858120-m03" state is running.
	I1017 20:25:31.417186  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:31.445988  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:31.446246  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:25:31.446327  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:31.476234  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:31.476543  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:31.476558  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:25:31.477171  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:25:34.759062  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:34.759091  633180 ubuntu.go:182] provisioning hostname "ha-858120-m03"
	I1017 20:25:34.759181  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:34.785061  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:34.785366  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:34.785384  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m03 && echo "ha-858120-m03" | sudo tee /etc/hostname
	I1017 20:25:35.026879  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:35.027037  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.055472  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:35.055775  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:35.055791  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:35.277230  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:35.277256  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:35.277273  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:35.277283  633180 provision.go:84] configureAuth start
	I1017 20:25:35.277348  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:35.311355  633180 provision.go:143] copyHostCerts
	I1017 20:25:35.311397  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311430  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:35.311438  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311519  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:35.311605  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311621  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:35.311626  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311652  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:35.311691  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311709  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:35.311713  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311737  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:35.311782  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m03 san=[127.0.0.1 192.168.49.4 ha-858120-m03 localhost minikube]
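provision.go above mints a fresh server certificate for ha-858120-m03 with SANs [127.0.0.1 192.168.49.4 ha-858120-m03 localhost minikube] before copying it to /etc/docker/server.pem. Verifying which SANs ended up in such a certificate is straightforward with crypto/x509; a sketch reading the machines/server.pem produced by this run:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs :", cert.IPAddresses)
}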
	I1017 20:25:35.867211  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:35.867305  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:35.867370  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.885861  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:36.014744  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:36.014818  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:36.078628  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:36.078695  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:36.159581  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:36.159683  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:25:36.221533  633180 provision.go:87] duration metric: took 944.235432ms to configureAuth
	I1017 20:25:36.221570  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:36.221864  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:36.222030  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:36.252315  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:36.252618  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:36.252633  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:37.901354  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:37.901379  633180 machine.go:96] duration metric: took 6.455113021s to provisionDockerMachine
	I1017 20:25:37.901397  633180 start.go:293] postStartSetup for "ha-858120-m03" (driver="docker")
	I1017 20:25:37.901423  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:37.901507  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:37.901580  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:37.931348  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.063033  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:38.067834  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:38.067869  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:38.067882  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:38.067943  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:38.068028  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:38.068035  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:38.068144  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:38.080413  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:38.111750  633180 start.go:296] duration metric: took 210.321276ms for postStartSetup
	I1017 20:25:38.111848  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:38.111903  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.139479  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.252206  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:38.257552  633180 fix.go:56] duration metric: took 7.391512723s for fixHost
	I1017 20:25:38.257574  633180 start.go:83] releasing machines lock for "ha-858120-m03", held for 7.39155818s
	I1017 20:25:38.257643  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:38.279335  633180 out.go:179] * Found network options:
	I1017 20:25:38.282289  633180 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 20:25:38.285193  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285225  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285250  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285261  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:38.285342  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:38.285383  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.285405  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:38.285456  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.309400  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.319419  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.495206  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:38.620333  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:38.620409  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:38.635710  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:38.635735  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:38.635766  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:38.635815  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:38.658258  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:38.677709  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:38.677780  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:38.695381  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:38.718728  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:38.983870  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:39.232982  633180 docker.go:234] disabling docker service ...
	I1017 20:25:39.233056  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:39.251900  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:39.268736  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:39.513181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:39.774360  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:39.795448  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:39.819737  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:39.819803  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.835507  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:39.835578  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.848330  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.863809  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.873655  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:39.886248  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.899031  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.910745  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.923167  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:39.945269  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:39.956015  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:40.185598  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:27:10.577185  633180 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.391479788s)
	I1017 20:27:10.577210  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:27:10.577270  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:27:10.581599  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:27:10.581663  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:27:10.586217  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:27:10.618110  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:27:10.618197  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.657726  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.690017  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:27:10.692996  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:27:10.695853  633180 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 20:27:10.698743  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:27:10.717568  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:27:10.721686  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:10.732598  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:27:10.732855  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:10.733110  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:27:10.755756  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:27:10.756043  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.4
	I1017 20:27:10.756057  633180 certs.go:195] generating shared ca certs ...
	I1017 20:27:10.756073  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:27:10.756206  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:27:10.756249  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:27:10.756259  633180 certs.go:257] generating profile certs ...
	I1017 20:27:10.756334  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:27:10.756400  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.daaf2b71
	I1017 20:27:10.756443  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:27:10.756456  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:27:10.756468  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:27:10.756484  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:27:10.756494  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:27:10.756505  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:27:10.756520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:27:10.756531  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:27:10.756545  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:27:10.756595  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:27:10.756627  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:27:10.756639  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:27:10.756664  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:27:10.756689  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:27:10.756714  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:27:10.756760  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:27:10.756791  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:10.756807  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:27:10.756818  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:27:10.756875  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:27:10.776286  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:27:10.875440  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:27:10.879271  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:27:10.887346  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:27:10.890991  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:27:10.899445  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:27:10.902677  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:27:10.910747  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:27:10.914609  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:27:10.923275  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:27:10.927331  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:27:10.937614  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:27:10.941051  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:27:10.949375  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:27:10.970388  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:27:10.989978  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:27:11.024313  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:27:11.045252  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:27:11.067969  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:27:11.093977  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:27:11.116400  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:27:11.143991  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:27:11.165234  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:27:11.186154  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:27:11.204999  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:27:11.217584  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:27:11.231184  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:27:11.245544  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:27:11.258825  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:27:11.273380  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:27:11.288154  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:27:11.301714  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:27:11.307871  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:27:11.316139  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320071  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320164  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.360582  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:27:11.368911  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:27:11.386044  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389821  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389916  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.431364  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:27:11.439391  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:27:11.448231  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452172  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452235  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.493408  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:27:11.501304  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:27:11.505093  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:27:11.546404  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:27:11.588587  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:27:11.629385  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:27:11.670643  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:27:11.711584  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:27:11.752896  633180 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 20:27:11.752991  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:27:11.753019  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:27:11.753080  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:27:11.765738  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:27:11.765801  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 20:27:11.765864  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:27:11.773834  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:27:11.773902  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:27:11.782020  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:27:11.794989  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:27:11.809996  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:27:11.825247  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:27:11.828873  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:11.838796  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:11.986822  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.004552  633180 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:27:12.005009  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:12.009913  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:27:12.012573  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:12.166504  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.181240  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:27:12.181372  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:27:12.181669  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m03" to be "Ready" ...
	W1017 20:27:14.185938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:16.186949  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:18.685673  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:20.686393  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:23.185742  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:25.186041  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:27.686171  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:30.186140  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:32.685938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:34.686362  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:37.189099  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:39.685178  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:41.685898  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:43.686246  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:46.185981  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:48.186022  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:50.685565  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:53.185024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:55.185063  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:57.186756  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:59.685967  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:02.185450  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:04.685930  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:07.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:09.185945  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:11.685298  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:13.685825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:16.186173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:18.685675  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:21.185822  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:23.686024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:25.686653  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:27.688976  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:30.185995  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:32.685998  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:34.686062  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:37.185512  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:39.684946  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:41.685173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:43.685392  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:45.686411  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:48.185559  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:50.685010  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:52.685699  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:54.685799  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:57.185287  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:59.185541  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:01.186445  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:03.685663  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:05.686118  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:08.185421  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:10.185464  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:12.685166  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:14.685776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:16.686147  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:18.686284  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:21.185551  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:23.685297  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:26.185709  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:28.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:30.186229  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:32.685640  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:34.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:36.685906  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:39.185156  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:41.185196  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:43.185432  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:45.189065  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:47.685980  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:50.185249  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:52.186422  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:54.685912  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:57.185530  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:59.185859  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:01.187381  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:03.685399  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:05.685481  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:08.187943  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:10.689106  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:13.185786  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:15.685607  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:17.686048  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:19.686753  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:22.185049  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:24.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:26.685608  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:28.686143  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:31.185273  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:33.186568  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:35.685304  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:37.685459  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:39.685964  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:42.186035  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:44.186982  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:46.685781  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:49.185082  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:51.185419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:53.686212  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:56.185582  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:58.185659  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:00.222492  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:02.685725  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:04.686504  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:07.186161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:09.685238  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:11.685865  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:14.185500  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:16.185620  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:18.192262  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:20.686051  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:23.185373  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:25.686121  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:28.187578  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:30.689269  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:33.185825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:35.686100  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:38.186012  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:40.685515  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:42.685703  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:44.685871  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:47.185764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:49.685433  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:51.685733  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:54.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:56.685619  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:59.185113  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:01.185211  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:03.185561  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:05.186288  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:07.685440  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:09.685758  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:12.185776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:14.185887  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:16.685436  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:19.185337  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:21.686419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:24.186002  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:26.686017  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:29.185789  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:31.686359  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:34.185117  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:36.185746  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:38.185848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:40.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:43.185924  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:45.186873  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:47.685424  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:49.685760  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:52.185842  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:54.685648  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:57.185264  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:59.185532  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:01.186342  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:03.685323  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:05.685848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:07.686600  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:10.185305  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	I1017 20:33:12.181809  633180 node_ready.go:38] duration metric: took 6m0.000088857s for node "ha-858120-m03" to be "Ready" ...
	I1017 20:33:12.184950  633180 out.go:203] 
	W1017 20:33:12.187811  633180 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1017 20:33:12.187835  633180 out.go:285] * 
	W1017 20:33:12.189989  633180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:33:12.193072  633180 out.go:203] 
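
	The failure above is GUEST_START: node "ha-858120-m03" stayed in "Ready":"Unknown" for the entire 6m0s wait. A minimal follow-up sketch for inspecting that node by hand, assuming the profile and node names from this run and a kubeconfig pointing at the cluster (suggested diagnostics, not part of the test output):

	  kubectl get node ha-858120-m03 -o wide
	  kubectl get node ha-858120-m03 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  # kubelet logs on the node that never became Ready
	  minikube -p ha-858120 ssh -n ha-858120-m03 -- sudo journalctl -u kubelet --no-pager -n 100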
	
	
	==> CRI-O <==
	Oct 17 20:25:28 ha-858120 crio[661]: time="2025-10-17T20:25:28.449562303Z" level=info msg="Started container" PID=1184 containerID=dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd description=kube-system/coredns-66bc5c9577-hc5rq/coredns id=3b68bae1-e38f-42c1-bdab-f61b3987b2a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1860023794c840fe5be850bb22c178acfad4e2cba7c02a3af6ce14acb4379be7
	Oct 17 20:25:59 ha-858120 conmon[1152]: conmon e299f9f677259417858b <ninfo>: container 1163 exited with status 1
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.07219365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72b412f9-766c-4334-a938-00c3ec219964 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.076148095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=30596c8c-f29a-4d19-9d8b-b08ba7b6cf56 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082176481Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082434036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.09632455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096698857Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/passwd: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096725565Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/group: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.097064885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.125738481Z" level=info msg="Created container 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.129459847Z" level=info msg="Starting container: 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008" id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.13991512Z" level=info msg="Started container" PID=1399 containerID=5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008 description=kube-system/storage-provisioner/storage-provisioner id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.628460002Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632408867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.63244616Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632468453Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645565094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645599006Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645616032Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650549978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650585654Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650619608Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.654017064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.65405073Z" level=info msg="Updated default CNI network name to kindnet"
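
	Earlier in this run, `sudo systemctl restart crio` took 1m30.4s before the socket came back. A short sketch for checking the runtime on the node, assuming the same profile name; these are standard systemd/crictl commands rather than output from this test:

	  minikube -p ha-858120 ssh -- sudo systemctl status crio --no-pager
	  minikube -p ha-858120 ssh -- sudo journalctl -u crio --no-pager -n 200
	  minikube -p ha-858120 ssh -- sudo crictl ps -a
	  minikube -p ha-858120 ssh -- sudo crictl info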
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b5162cc662da       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       3                   b8cc01892712d       storage-provisioner                 kube-system
	dc932b06eb666       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1860023794c84       coredns-66bc5c9577-hc5rq            kube-system
	f99357006a077       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   24158395efe09       coredns-66bc5c9577-zfbms            kube-system
	30fbb87d1faca       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   916eeadf90187       kube-proxy-5qtb8                    kube-system
	53ef170773eb6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d544ca125ccd8       busybox-7b57f96db7-jw7vx            default
	e299f9f677259       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       2                   b8cc01892712d       storage-provisioner                 kube-system
	97aba0e5d7c48       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   4485a8e917cbe       kindnet-7bwxv                       kube-system
	9ce296c3989a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	638256daf481d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   7 minutes ago       Running             kube-apiserver            2                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	ee8a159707f90       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dcbb9d5285b37       kube-vip-ha-858120                  kube-system
	62a0a9e565cbd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	09cba02ad2598       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   fce4c8d39b2df       etcd-ha-858120                      kube-system
	56f597b80ce9d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	7965630635b8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   8e060a6690898       kube-scheduler-ha-858120            kube-system
	
	
	==> coredns [dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60334 - 25267 "HINFO IN 5061499944827162834.2776303602288628219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03310744s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [f99357006a077698a85f223986d69f2d7d83e5bce90c1c2cc8ec2f393e14a413] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47468 - 22569 "HINFO IN 1283965037511611162.4618766947171906600. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039278336s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
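
	Both CoreDNS pods above report i/o timeouts against the kubernetes Service at 10.96.0.1:443, which usually points at kube-proxy or the CNI rather than CoreDNS itself. A minimal sketch of connectivity checks, assuming kubectl access to this cluster (the kube-proxy label selector is the conventional one and is an assumption here):

	  kubectl -n default get endpoints kubernetes
	  kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
	  # confirm the Service VIP answers from the node itself
	  minikube -p ha-858120 ssh -- curl -sk -m 5 https://10.96.0.1:443/version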
	
	
	==> describe nodes <==
	Name:               ha-858120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_18_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:18:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-858120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                8074ca1f-e50b-46a3-ae2a-18fe40cb596a
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jw7vx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hc5rq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-zfbms             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-858120                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7bwxv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-858120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-858120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5qtb8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-858120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-858120                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m44s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-858120 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   Starting                 8m17s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m17s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m17s (x8 over 8m17s)  kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m17s (x8 over 8m17s)  kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m17s (x8 over 8m17s)  kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	
	
	Name:               ha-858120-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:33:06 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:33:06 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:33:06 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:33:06 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-858120-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                86212adb-5900-4e82-861f-965be14c377b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8kb7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-858120-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-n44c4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-858120-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-858120-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wzlp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-858120-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-858120-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m47s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m25s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 9m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m34s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m33s (x9 over 9m34s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m33s (x8 over 9m34s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m33s (x7 over 9m34s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9m1s                   node-controller  Node ha-858120-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           8m56s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 8m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m13s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m13s (x8 over 8m13s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m13s (x8 over 8m13s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m13s (x8 over 8m13s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m39s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           7m12s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	
	
	Name:               ha-858120-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_20_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:20:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:24:03 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 20:23:39 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 20:23:39 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 20:23:39 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 20:23:39 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-858120-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                2d947669-bb38-4697-a62c-b48c5ac1f2a6
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8llg5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-858120-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-mk8st                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-ha-858120-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-858120-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-52dzj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-858120-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-858120-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        12m    kube-proxy       
	  Normal  RegisteredNode  12m    node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  RegisteredNode  12m    node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  RegisteredNode  8m56s  node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  RegisteredNode  7m39s  node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  RegisteredNode  7m12s  node-controller  Node ha-858120-m03 event: Registered Node ha-858120-m03 in Controller
	  Normal  NodeNotReady    6m49s  node-controller  Node ha-858120-m03 status is now: NodeNotReady
	
	
	Name:               ha-858120-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_22_19_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:22:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:24:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-858120-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                78570753-9906-4f75-b3e5-06c23a58a2cc
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jl4tq       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-proxy-cn926    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  10m (x3 over 10m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x3 over 10m)  kubelet          Node ha-858120-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x3 over 10m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           10m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-858120-m04 status is now: NodeReady
	  Normal  RegisteredNode           8m56s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m39s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m12s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeNotReady             6m49s              node-controller  Node ha-858120-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:18] overlayfs: idmapped layers are currently not supported
	[Oct17 20:19] overlayfs: idmapped layers are currently not supported
	[Oct17 20:20] overlayfs: idmapped layers are currently not supported
	[Oct17 20:22] overlayfs: idmapped layers are currently not supported
	[Oct17 20:23] overlayfs: idmapped layers are currently not supported
	[Oct17 20:24] overlayfs: idmapped layers are currently not supported
	[Oct17 20:25] overlayfs: idmapped layers are currently not supported
	[ +32.795830] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b] <==
	{"level":"warn","ts":"2025-10-17T20:32:47.933625Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:48.230104Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:48.230154Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:52.231259Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:52.231313Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:52.930662Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:52.933946Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:56.232962Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:56.233115Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:57.931080Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:32:57.934409Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:00.234782Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:00.235072Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:02.932529Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:02.934965Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236719Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236770Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939190Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939286Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238618Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238697Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240272Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240343Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940296Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940377Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 20:33:13 up  3:15,  0 user,  load average: 0.43, 0.81, 1.31
	Linux ha-858120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97aba0e5d7c482a104be9a87cd7b78aec663a93d84c72a85316a204d1548cc16] <==
	I1017 20:32:38.631566       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:32:48.628762       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:32:48.628795       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:32:48.628960       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:32:48.628974       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:32:48.629027       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:32:48.629038       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:32:48.629095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:48.629107       1 main.go:301] handling current node
	I1017 20:32:58.632651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:58.632696       1 main.go:301] handling current node
	I1017 20:32:58.632712       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:32:58.632718       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:32:58.632922       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:32:58.632931       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:32:58.633040       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:32:58.633048       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.624264       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:33:08.624367       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:33:08.624604       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:33:08.624662       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:33:08.624908       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:33:08.624952       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.625258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:33:08.625305       1 main.go:301] handling current node
	
	
	==> kube-apiserver [62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1] <==
	I1017 20:24:57.578002       1 server.go:150] Version: v1.34.1
	I1017 20:24:57.578115       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1017 20:24:59.609875       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1017 20:24:59.609983       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1017 20:24:59.610018       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1017 20:24:59.610051       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1017 20:24:59.610078       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1017 20:24:59.610108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1017 20:24:59.610137       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1017 20:24:59.610165       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1017 20:24:59.610193       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1017 20:24:59.610224       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1017 20:24:59.610253       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1017 20:24:59.610280       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1017 20:24:59.718012       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 20:24:59.731289       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1017 20:24:59.735252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1017 20:24:59.771928       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:24:59.798433       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1017 20:24:59.798556       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1017 20:24:59.798825       1 instance.go:239] Using reconciler: lease
	W1017 20:24:59.801239       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1017 20:25:19.800493       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [638256daf481df23c6dc0c5f0e0206e9031fe11c02f69b76b36adebb4f77751b] <==
	I1017 20:25:27.747267       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:25:27.748048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:25:27.748245       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:25:27.748283       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:25:27.755722       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:25:27.756801       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:25:27.756825       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:25:27.756833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:25:27.756839       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:25:27.757101       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:25:27.766983       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:25:27.769225       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:25:27.769488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:25:27.769529       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:25:27.804915       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1017 20:25:27.812648       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 20:25:27.814174       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:25:27.860617       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 20:25:27.881031       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 20:25:27.934437       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:25:28.456701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1017 20:25:29.034012       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 20:25:34.663674       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:25:34.733687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:25:47.528926       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589] <==
	I1017 20:24:58.662485       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:25:01.671944       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 20:25:01.678227       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:01.684843       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 20:25:01.685107       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 20:25:01.685542       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 20:25:01.685556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.598084       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [9ce296c3989a1de13e23cf6043950e41ef86d2754f0427491575c19984a6d824] <==
	I1017 20:25:34.377987       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:25:34.378051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:25:34.378105       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:25:34.378132       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:25:34.378160       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:25:34.387349       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:25:34.387499       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-858120-m04"
	I1017 20:25:34.392049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:25:34.392153       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:25:34.403270       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:25:34.410392       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:25:34.420898       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:25:34.428383       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:25:34.429955       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:25:34.452565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.452639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:25:34.461211       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:25:34.461569       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:25:34.467835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:25:34.467866       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:25:34.467873       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:25:34.491229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.517732       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:25:34.559435       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:31:26.912659       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-8llg5"
	
	
	==> kube-proxy [30fbb87d1faca7dfc4d9f64b418999dbb75c40979544bddc3ad099cb9ad1a052] <==
	I1017 20:25:29.123150       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:25:29.213307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:25:29.325124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:25:29.325227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 20:25:29.325371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:25:29.346595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:25:29.346714       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:25:29.351735       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:25:29.352127       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:25:29.352304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:29.353581       1 config.go:200] "Starting service config controller"
	I1017 20:25:29.353705       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:25:29.353767       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:25:29.353798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:25:29.353833       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:25:29.353860       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:25:29.354594       1 config.go:309] "Starting node config controller"
	I1017 20:25:29.357072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:25:29.357126       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:25:29.454278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:25:29.454382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:25:29.454408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6] <==
	I1017 20:25:27.558327       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:27.562717       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.562823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.563148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:25:27.563232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.643472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:25:27.643574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:25:27.643643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:25:27.643704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:25:27.643780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:25:27.643853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:25:27.643897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:25:27.643934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:25:27.643978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:25:27.644027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:25:27.644071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:25:27.644291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:25:27.644343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:25:27.644389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:25:27.644437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:25:27.644476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:25:27.644569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:25:27.644592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:25:27.685189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 20:25:29.163171       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891343     797 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891630     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.901895     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-cni-cfg\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902145     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-xtables-lock\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902349     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-xtables-lock\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902474     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-lib-modules\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902634     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-lib-modules\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902760     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9e9dfd7-e90a-4da3-969d-2669daa3d123-tmp\") pod \"storage-provisioner\" (UID: \"f9e9dfd7-e90a-4da3-969d-2669daa3d123\") " pod="kube-system/storage-provisioner"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.923822     797 scope.go:117] "RemoveContainer" containerID="56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	Oct 17 20:25:27 ha-858120 kubelet[797]: E1017 20:25:27.935263     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-858120\" already exists" pod="kube-system/etcd-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.935307     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.974898     797 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.006913     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-858120\" already exists" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.007149     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.035243     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-858120\" already exists" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.083154     797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-858120" podStartSLOduration=1.083097255 podStartE2EDuration="1.083097255s" podCreationTimestamp="2025-10-17 20:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:25:28.06330839 +0000 UTC m=+31.423690073" watchObservedRunningTime="2025-10-17 20:25:28.083097255 +0000 UTC m=+31.443478937"
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.161072     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d WatchSource:0}: Error finding container b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d: Status 404 returned error can't find the container with id b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.182019     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b WatchSource:0}: Error finding container d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b: Status 404 returned error can't find the container with id d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.276364     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f WatchSource:0}: Error finding container 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f: Status 404 returned error can't find the container with id 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.819931     797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8957fe84f5b782b1a91a47b00072c3" path="/var/lib/kubelet/pods/fc8957fe84f5b782b1a91a47b00072c3/volumes"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.778662     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.778732     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.779364     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.779405     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist"
	Oct 17 20:25:59 ha-858120 kubelet[797]: I1017 20:25:59.061021     797 scope.go:117] "RemoveContainer" containerID="e299f9f677259417858bfdf991397b3ef57a6485f2baf285eaece413087c058b"
	

                                                
                                                
-- /stdout --
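The kube-scheduler "Failed to watch ... is forbidden" entries and the kubelet "Failed creating a mirror pod ... already exists" entries in the log dump above are the kind of transient errors a control-plane restart produces while RBAC and informer caches are still warming up; the later "Caches are synced" line suggests the scheduler did come back. A minimal sketch for confirming that after such a restart, assuming the kubeadm default component labels are present (the label selector and --tail value are illustrative, not taken from the test):

    kubectl --context ha-858120 -n kube-system get pods -l component=kube-scheduler
    kubectl --context ha-858120 -n kube-system logs -l component=kube-scheduler --tail=20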
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-858120 -n ha-858120
helpers_test.go:269: (dbg) Run:  kubectl --context ha-858120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-twgcq
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq
helpers_test.go:290: (dbg) kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-twgcq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b2h7l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b2h7l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  108s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (543.92s)
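The FailedScheduling events above show why busybox-7b57f96db7-twgcq never left Pending: of the four nodes, two apparently already host a busybox replica (pod anti-affinity) and the other two still carry the node.kubernetes.io/unreachable taint after the restart. A minimal diagnostic sketch, assuming the ha-858120 kubeconfig context is still reachable (the grep filter is illustrative, not part of the test):

    kubectl --context ha-858120 get nodes -o wide
    kubectl --context ha-858120 describe nodes | grep -E 'Name:|Taints:'
    kubectl --context ha-858120 get pods -l app=busybox -o wide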

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (8.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 node delete m03 --alsologtostderr -v 5: (5.641314839s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: exit status 7 (608.533379ms)

                                                
                                                
-- stdout --
	ha-858120
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858120-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858120-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:33:20.745419  639358 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:33:20.745633  639358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:33:20.745660  639358 out.go:374] Setting ErrFile to fd 2...
	I1017 20:33:20.745679  639358 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:33:20.745979  639358 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:33:20.746217  639358 out.go:368] Setting JSON to false
	I1017 20:33:20.746278  639358 mustload.go:65] Loading cluster: ha-858120
	I1017 20:33:20.746354  639358 notify.go:220] Checking for updates...
	I1017 20:33:20.747611  639358 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:33:20.747638  639358 status.go:174] checking status of ha-858120 ...
	I1017 20:33:20.748234  639358 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:33:20.768919  639358 status.go:371] ha-858120 host status = "Running" (err=<nil>)
	I1017 20:33:20.768942  639358 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:33:20.769230  639358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:33:20.788596  639358 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:33:20.788879  639358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:33:20.788935  639358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:33:20.812076  639358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:33:20.917121  639358 ssh_runner.go:195] Run: systemctl --version
	I1017 20:33:20.925303  639358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:33:20.941894  639358 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:33:21.015909  639358 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:33:21.004843898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:33:21.016485  639358 kubeconfig.go:125] found "ha-858120" server: "https://192.168.49.254:8443"
	I1017 20:33:21.016521  639358 api_server.go:166] Checking apiserver status ...
	I1017 20:33:21.016566  639358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:33:21.029270  639358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1054/cgroup
	I1017 20:33:21.037921  639358 api_server.go:182] apiserver freezer: "6:freezer:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio/crio-638256daf481df23c6dc0c5f0e0206e9031fe11c02f69b76b36adebb4f77751b"
	I1017 20:33:21.037988  639358 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio/crio-638256daf481df23c6dc0c5f0e0206e9031fe11c02f69b76b36adebb4f77751b/freezer.state
	I1017 20:33:21.045398  639358 api_server.go:204] freezer state: "THAWED"
	I1017 20:33:21.045469  639358 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 20:33:21.055447  639358 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 20:33:21.055479  639358 status.go:463] ha-858120 apiserver status = Running (err=<nil>)
	I1017 20:33:21.055494  639358 status.go:176] ha-858120 status: &{Name:ha-858120 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:33:21.055511  639358 status.go:174] checking status of ha-858120-m02 ...
	I1017 20:33:21.055831  639358 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:33:21.081680  639358 status.go:371] ha-858120-m02 host status = "Running" (err=<nil>)
	I1017 20:33:21.081708  639358 host.go:66] Checking if "ha-858120-m02" exists ...
	I1017 20:33:21.082027  639358 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:33:21.099687  639358 host.go:66] Checking if "ha-858120-m02" exists ...
	I1017 20:33:21.100008  639358 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:33:21.100059  639358 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:33:21.118836  639358 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:33:21.220781  639358 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:33:21.234699  639358 kubeconfig.go:125] found "ha-858120" server: "https://192.168.49.254:8443"
	I1017 20:33:21.234730  639358 api_server.go:166] Checking apiserver status ...
	I1017 20:33:21.234793  639358 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:33:21.246156  639358 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1195/cgroup
	I1017 20:33:21.254708  639358 api_server.go:182] apiserver freezer: "6:freezer:/docker/a909c6a1311874f15e7e530cc73981436ca3c6837d1db7441a471bd80b1ccb91/crio/crio-076b2863637b83e3d07aef25283061e2a0bcc3e3bfa03181c1070ceae5f80796"
	I1017 20:33:21.254834  639358 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a909c6a1311874f15e7e530cc73981436ca3c6837d1db7441a471bd80b1ccb91/crio/crio-076b2863637b83e3d07aef25283061e2a0bcc3e3bfa03181c1070ceae5f80796/freezer.state
	I1017 20:33:21.262408  639358 api_server.go:204] freezer state: "THAWED"
	I1017 20:33:21.262434  639358 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 20:33:21.270805  639358 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 20:33:21.270837  639358 status.go:463] ha-858120-m02 apiserver status = Running (err=<nil>)
	I1017 20:33:21.270847  639358 status.go:176] ha-858120-m02 status: &{Name:ha-858120-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:33:21.270864  639358 status.go:174] checking status of ha-858120-m04 ...
	I1017 20:33:21.271213  639358 cli_runner.go:164] Run: docker container inspect ha-858120-m04 --format={{.State.Status}}
	I1017 20:33:21.290055  639358 status.go:371] ha-858120-m04 host status = "Stopped" (err=<nil>)
	I1017 20:33:21.290079  639358 status.go:384] host is not running, skipping remaining checks
	I1017 20:33:21.290087  639358 status.go:176] ha-858120-m04 status: &{Name:ha-858120-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5" : exit status 7
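The status output above shows the actual cause: the m04 worker is still Stopped after the earlier restart, so "minikube status" exits non-zero (the exit code appears to be a bitmask of failed checks, with 7 covering host, kubelet and apiserver all down for a node) and the test treats any non-zero exit as a failure. A minimal recovery sketch, assuming the profile still exists and using the node name and command syntax shown elsewhere in this report (not something the test itself attempts):

    out/minikube-linux-arm64 -p ha-858120 node start m04
    out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5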
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-858120
helpers_test.go:243: (dbg) docker inspect ha-858120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	        "Created": "2025-10-17T20:18:20.77215583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:24:49.89013736Z",
	            "FinishedAt": "2025-10-17T20:24:49.310249081Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hostname",
	        "HostsPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hosts",
	        "LogPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196-json.log",
	        "Name": "/ha-858120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-858120:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-858120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	                "LowerDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-858120",
	                "Source": "/var/lib/docker/volumes/ha-858120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-858120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-858120",
	                "name.minikube.sigs.k8s.io": "ha-858120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30368a165299690b2c1e64ba7fbd000063595e2b8330a6a0386fe8ae84472e14",
	            "SandboxKey": "/var/run/docker/netns/30368a165299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-858120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:7a:f4:71:ea:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a78c784685bd8e7296863536d4a6677a78ffb6c83e55d8ef3ae48685090ce7d1",
	                    "EndpointID": "2f35a70319780f77f7bb419c5c8b2a8ea449f45b75f1d2c0d0564b394c3bec61",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-858120",
	                        "0886947eb334"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-858120 -n ha-858120
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 logs -n 25: (1.398884295s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp testdata/cp-test.txt ha-858120-m04:/home/docker/cp-test.txt                                                             │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m04.txt │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m04_ha-858120.txt                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120.txt                                                 │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node start m02 --alsologtostderr -v 5                                                                                      │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:24 UTC │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ stop    │ ha-858120 stop --alsologtostderr -v 5                                                                                                │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │ 17 Oct 25 20:24 UTC │
	│ start   │ ha-858120 start --wait true --alsologtostderr -v 5                                                                                   │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:33 UTC │                     │
	│ node    │ ha-858120 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:33 UTC │ 17 Oct 25 20:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:24:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:24:49.626381  633180 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:24:49.626517  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626528  633180 out.go:374] Setting ErrFile to fd 2...
	I1017 20:24:49.626533  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626788  633180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:24:49.627220  633180 out.go:368] Setting JSON to false
	I1017 20:24:49.628041  633180 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11236,"bootTime":1760721454,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:24:49.628110  633180 start.go:141] virtualization:  
	I1017 20:24:49.633530  633180 out.go:179] * [ha-858120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:24:49.636591  633180 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:24:49.636670  633180 notify.go:220] Checking for updates...
	I1017 20:24:49.642574  633180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:24:49.645486  633180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:49.648436  633180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:24:49.651294  633180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:24:49.654188  633180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:24:49.657632  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:49.657777  633180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:24:49.688170  633180 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:24:49.688301  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.745303  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.735869738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.745414  633180 docker.go:318] overlay module found
	I1017 20:24:49.748552  633180 out.go:179] * Using the docker driver based on existing profile
	I1017 20:24:49.751497  633180 start.go:305] selected driver: docker
	I1017 20:24:49.751513  633180 start.go:925] validating driver "docker" against &{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.751702  633180 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:24:49.751804  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.806673  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.798122578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.807082  633180 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:24:49.807153  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:49.807223  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:49.807278  633180 start.go:349] cluster config:
	{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.810596  633180 out.go:179] * Starting "ha-858120" primary control-plane node in "ha-858120" cluster
	I1017 20:24:49.813288  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:49.816087  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:49.818802  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:49.818879  633180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:24:49.818889  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:49.818892  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:49.819084  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:49.819096  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:49.819258  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:49.838368  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:49.838387  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:49.838401  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:49.838423  633180 start.go:360] acquireMachinesLock for ha-858120: {Name:mk62278368bd1da921b0ccf6844a662f4fa595df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:49.838475  633180 start.go:364] duration metric: took 34.511µs to acquireMachinesLock for "ha-858120"
	I1017 20:24:49.838494  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:49.838499  633180 fix.go:54] fixHost starting: 
	I1017 20:24:49.838762  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:49.855336  633180 fix.go:112] recreateIfNeeded on ha-858120: state=Stopped err=<nil>
	W1017 20:24:49.855369  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:49.858630  633180 out.go:252] * Restarting existing docker container for "ha-858120" ...
	I1017 20:24:49.858710  633180 cli_runner.go:164] Run: docker start ha-858120
	I1017 20:24:50.114094  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:50.136057  633180 kic.go:430] container "ha-858120" state is running.
	I1017 20:24:50.136454  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:50.160255  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:50.160500  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:50.160583  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:50.184023  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:50.184342  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:50.184352  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:50.185019  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40052->127.0.0.1:33552: read: connection reset by peer
	I1017 20:24:53.330671  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.330705  633180 ubuntu.go:182] provisioning hostname "ha-858120"
	I1017 20:24:53.330778  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.348402  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.348733  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.348751  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120 && echo "ha-858120" | sudo tee /etc/hostname
	I1017 20:24:53.508835  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.508970  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.526510  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.526830  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.526846  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:24:53.671383  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:24:53.671409  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:24:53.671452  633180 ubuntu.go:190] setting up certificates
	I1017 20:24:53.671461  633180 provision.go:84] configureAuth start
	I1017 20:24:53.671530  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:53.689159  633180 provision.go:143] copyHostCerts
	I1017 20:24:53.689210  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689244  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:24:53.689256  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689334  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:24:53.689461  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689496  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:24:53.689506  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689536  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:24:53.689582  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689603  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:24:53.689611  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689635  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:24:53.689684  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120 san=[127.0.0.1 192.168.49.2 ha-858120 localhost minikube]
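For illustration, a minimal Go sketch of what the server-cert generation step above amounts to: issue a certificate signed by the existing minikube CA with the SANs listed in the log line. The file names below are placeholders, the CA key is assumed to be PEM-encoded PKCS#1 RSA, and this is not minikube's own code.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the existing CA cert and key (placeholder paths for the .minikube/certs files above).
	caPEM, err := os.ReadFile("ca.pem")
	check(err)
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	check(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	check(err)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA key
	check(err)

	// Server certificate template with the same kind of SANs listed in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-858120", Organization: []string{"jenkins.ha-858120"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-858120", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}

	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &priv.PublicKey, caKey)
	check(err)

	// Write the PEM-encoded server cert and key, as copyRemoteCerts later ships them to the node.
	check(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	check(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(priv)}), 0o600))
}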
	I1017 20:24:54.151535  633180 provision.go:177] copyRemoteCerts
	I1017 20:24:54.151620  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:24:54.151667  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.170207  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.274864  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:24:54.274925  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:24:54.292724  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:24:54.292785  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 20:24:54.311391  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:24:54.311452  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:24:54.329407  633180 provision.go:87] duration metric: took 657.913595ms to configureAuth
	I1017 20:24:54.329435  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:24:54.329671  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:54.329775  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.347176  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:54.347484  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:54.347504  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:24:54.678767  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:24:54.678791  633180 machine.go:96] duration metric: took 4.518274151s to provisionDockerMachine
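The provisioning commands above all run over SSH against the container's published port 22 mapping (127.0.0.1:33552, user "docker", key path as logged). A rough sketch of that pattern with golang.org/x/crypto/ssh, illustrative only and not the cli_runner/libmachine implementation, might look like this:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are taken from the log lines above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in this throwaway test environment
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33552", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same crio.minikube provisioning command the log shows being sent over SSH.
	out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`)
	fmt.Println(string(out), err)
}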
	I1017 20:24:54.678802  633180 start.go:293] postStartSetup for "ha-858120" (driver="docker")
	I1017 20:24:54.678813  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:24:54.678876  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:24:54.678922  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.699409  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.802879  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:24:54.806060  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:24:54.806088  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:24:54.806100  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:24:54.806152  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:24:54.806232  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:24:54.806239  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:24:54.806342  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:24:54.813547  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:54.830587  633180 start.go:296] duration metric: took 151.77042ms for postStartSetup
	I1017 20:24:54.830688  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:24:54.830734  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.847827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.948374  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:24:54.953275  633180 fix.go:56] duration metric: took 5.114768478s for fixHost
	I1017 20:24:54.953301  633180 start.go:83] releasing machines lock for "ha-858120", held for 5.114818193s
	I1017 20:24:54.953368  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:54.969761  633180 ssh_runner.go:195] Run: cat /version.json
	I1017 20:24:54.969816  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.970081  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:24:54.970130  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.994236  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.003341  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.198024  633180 ssh_runner.go:195] Run: systemctl --version
	I1017 20:24:55.204628  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:24:55.242919  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:24:55.247648  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:24:55.247728  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:24:55.255380  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:24:55.255403  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:24:55.255433  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:24:55.255479  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:24:55.270476  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:24:55.283296  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:24:55.283382  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:24:55.298839  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:24:55.311724  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:24:55.424434  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:24:55.537289  633180 docker.go:234] disabling docker service ...
	I1017 20:24:55.537361  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:24:55.553026  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:24:55.566351  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:24:55.681250  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:24:55.798405  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:24:55.811378  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:24:55.825585  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:24:55.825661  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.834063  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:24:55.834172  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.843151  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.851611  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.860130  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:24:55.867797  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.876324  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.884581  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
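The sed invocations above pin the pause image and the cgroup driver in /etc/crio/crio.conf.d/02-crio.conf. The same two rewrites expressed as a small Go program (path and values copied from the log; it would need to run as root on the node) could be:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	s := string(data)
	// Pin the pause image and the cgroup driver, mirroring the sed commands in the log.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)

	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}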
	I1017 20:24:55.892952  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:24:55.900323  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:24:55.907965  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.021101  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:24:56.158831  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:24:56.158928  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:24:56.162776  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:24:56.162859  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:24:56.166390  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:24:56.192830  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:24:56.192972  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.221409  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.254422  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:24:56.257178  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:24:56.271792  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:24:56.275653  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.285727  633180 kubeadm.go:883] updating cluster {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:24:56.285880  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:56.285942  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.320941  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.320965  633180 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:24:56.321020  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.345716  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.345741  633180 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:24:56.345750  633180 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 20:24:56.345858  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:24:56.345940  633180 ssh_runner.go:195] Run: crio config
	I1017 20:24:56.409511  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:56.409542  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:56.409567  633180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:24:56.409589  633180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-858120 NodeName:ha-858120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:24:56.410072  633180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-858120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
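The kubeadm config above is rendered by minikube from a template. A stripped-down sketch of that idea, rendering just the InitConfiguration fields visible in the log from a Go text/template (the template text and struct here are illustrative, not minikube's actual ones):

package main

import (
	"os"
	"text/template"
)

// initCfg holds only the handful of values that appear in the generated config above.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(tmpl))
	cfg := initCfg{
		AdvertiseAddress: "192.168.49.2",
		BindPort:         8443,
		NodeName:         "ha-858120",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}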
	
	I1017 20:24:56.410096  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:24:56.410163  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:24:56.425787  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:56.425947  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
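The "giving up enabling control-plane load-balancing" message above comes from the ip_vs kernel-module check failing. An equivalent check, sketched here by scanning /proc/modules directly instead of shelling out to lsmod:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	found := false
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Loaded IPVS modules show up as ip_vs, ip_vs_rr, ip_vs_wrr, etc.
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			found = true
			break
		}
	}
	if found {
		fmt.Println("ip_vs available: kube-vip can use IPVS control-plane load-balancing")
	} else {
		fmt.Println("ip_vs not loaded: fall back to the ARP-based VIP only")
	}
}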
	I1017 20:24:56.426028  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:24:56.433575  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:24:56.433642  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 20:24:56.441456  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 20:24:56.453796  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:24:56.466376  633180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 20:24:56.480780  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:24:56.493351  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:24:56.497083  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.507006  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.614355  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:24:56.631138  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.2
	I1017 20:24:56.631170  633180 certs.go:195] generating shared ca certs ...
	I1017 20:24:56.631205  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:56.631352  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:24:56.631435  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:24:56.631448  633180 certs.go:257] generating profile certs ...
	I1017 20:24:56.631532  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:24:56.631567  633180 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f
	I1017 20:24:56.631581  633180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 20:24:57.260314  633180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f ...
	I1017 20:24:57.260390  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f: {Name:mk0eeb82ef1c3e333bd14f384361a665d81ea399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260624  633180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f ...
	I1017 20:24:57.260661  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f: {Name:mkd9170cb1ed384cce4c4204f35083d5972d0281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260803  633180 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt
	I1017 20:24:57.260987  633180 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key
	I1017 20:24:57.261179  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:24:57.261215  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:24:57.261249  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:24:57.261296  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:24:57.261335  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:24:57.261369  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:24:57.261415  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:24:57.261450  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:24:57.261591  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:24:57.261674  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:24:57.261740  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:24:57.261777  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:24:57.261824  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:24:57.261878  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:24:57.261950  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:24:57.262030  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:57.262099  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.262148  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.262186  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.262769  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:24:57.292641  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:24:57.324994  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:24:57.350011  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:24:57.393934  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:24:57.425087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:24:57.476207  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:24:57.521477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:24:57.553659  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:24:57.581891  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:24:57.616931  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:24:57.653395  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:24:57.676685  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:24:57.687849  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:24:57.697063  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701415  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701527  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.748713  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:24:57.761692  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:24:57.778101  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782605  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782719  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.851750  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:24:57.860250  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:24:57.872947  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877259  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877426  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.935424  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:24:57.948490  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:24:57.952867  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:24:58.010016  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:24:58.063976  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:24:58.108039  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:24:58.150227  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:24:58.194750  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
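The repeated "openssl x509 ... -checkend 86400" runs above ask whether each certificate expires within the next 24 hours. The same test expressed in Go with crypto/x509 (one path taken from the log; purely illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: does the cert expire within the next 24 hours?
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s: regenerate before restart")
	} else {
		fmt.Println("certificate valid beyond the check window")
	}
}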
	I1017 20:24:58.245919  633180 kubeadm.go:400] StartCluster: {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:58.246100  633180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:24:58.246199  633180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:24:58.291268  633180 cri.go:89] found id: "ee8a159707f901bec7d65f64a977c75fa75282a553082688f13964bab6bed5f2"
	I1017 20:24:58.291334  633180 cri.go:89] found id: "62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1"
	I1017 20:24:58.291353  633180 cri.go:89] found id: "09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b"
	I1017 20:24:58.291371  633180 cri.go:89] found id: "56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	I1017 20:24:58.291391  633180 cri.go:89] found id: "7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6"
	I1017 20:24:58.291421  633180 cri.go:89] found id: ""
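The container listing above uses crictl with a pod-namespace label filter. A small Go wrapper around the same command (assumes crictl is installed and the CRI-O socket is up; not the cri.go implementation) might look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the ssh_runner call in the log, run locally on the node.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers\n", len(ids))
	for _, id := range ids {
		fmt.Println(id)
	}
}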
	I1017 20:24:58.291493  633180 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:24:58.311475  633180 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:24:58Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:24:58.311623  633180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:24:58.320631  633180 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:24:58.320702  633180 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:24:58.320786  633180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:24:58.333311  633180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:58.333829  633180 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-858120" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.333984  633180 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "ha-858120" cluster setting kubeconfig missing "ha-858120" context setting]
	I1017 20:24:58.334333  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.334925  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
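The kapi client config dump above corresponds to a client-go rest.Config built from the profile's client certificate, key and CA. A minimal sketch of constructing and using such a client (assumes k8s.io/client-go is available as a module dependency; paths and host are the ones shown in the log):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg := &rest.Config{
		Host: "https://192.168.49.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: "/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt",
			KeyFile:  "/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// A trivial liveness probe of the restarted control plane: list the cluster's nodes.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("cluster reports %d nodes\n", len(nodes.Items))
}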
	I1017 20:24:58.335797  633180 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:24:58.335856  633180 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 20:24:58.335916  633180 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:24:58.335942  633180 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:24:58.335963  633180 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:24:58.335987  633180 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:24:58.336351  633180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:24:58.349523  633180 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 20:24:58.349592  633180 kubeadm.go:601] duration metric: took 28.869563ms to restartPrimaryControlPlane
	I1017 20:24:58.349615  633180 kubeadm.go:402] duration metric: took 103.705091ms to StartCluster
	I1017 20:24:58.349647  633180 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.349744  633180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.350418  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.350679  633180 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:24:58.350724  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:24:58.350749  633180 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:24:58.351348  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.356477  633180 out.go:179] * Enabled addons: 
	I1017 20:24:58.359610  633180 addons.go:514] duration metric: took 8.847324ms for enable addons: enabled=[]
	I1017 20:24:58.359682  633180 start.go:246] waiting for cluster config update ...
	I1017 20:24:58.359707  633180 start.go:255] writing updated cluster config ...
	I1017 20:24:58.363052  633180 out.go:203] 
	I1017 20:24:58.366186  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.366342  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.369685  633180 out.go:179] * Starting "ha-858120-m02" control-plane node in "ha-858120" cluster
	I1017 20:24:58.372589  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:58.375487  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:58.378319  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:58.378348  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:58.378444  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:58.378455  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:58.378576  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.378776  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:58.404390  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:58.404414  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:58.404426  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:58.404451  633180 start.go:360] acquireMachinesLock for ha-858120-m02: {Name:mk29f876727465da439698dbf4948f688d19b698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:58.404504  633180 start.go:364] duration metric: took 36.981µs to acquireMachinesLock for "ha-858120-m02"
	I1017 20:24:58.404523  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:58.404529  633180 fix.go:54] fixHost starting: m02
	I1017 20:24:58.404783  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.432805  633180 fix.go:112] recreateIfNeeded on ha-858120-m02: state=Stopped err=<nil>
	W1017 20:24:58.432831  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:58.436247  633180 out.go:252] * Restarting existing docker container for "ha-858120-m02" ...
	I1017 20:24:58.436330  633180 cli_runner.go:164] Run: docker start ha-858120-m02
	I1017 20:24:58.871041  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.895697  633180 kic.go:430] container "ha-858120-m02" state is running.
	I1017 20:24:58.896208  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:24:58.931596  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.931856  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:58.931915  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:24:58.966121  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:58.966428  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:24:58.966438  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:58.967202  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57724->127.0.0.1:33557: read: connection reset by peer
	I1017 20:25:02.146984  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.147066  633180 ubuntu.go:182] provisioning hostname "ha-858120-m02"
	I1017 20:25:02.147179  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.180883  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.181193  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.181204  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m02 && echo "ha-858120-m02" | sudo tee /etc/hostname
	I1017 20:25:02.371014  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.371118  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.406904  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.407240  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.407264  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:02.593559  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:02.593637  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:02.593669  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:02.593708  633180 provision.go:84] configureAuth start
	I1017 20:25:02.593805  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:02.623320  633180 provision.go:143] copyHostCerts
	I1017 20:25:02.623365  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623400  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:02.623407  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623486  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:02.623563  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623580  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:02.623584  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623609  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:02.623646  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623662  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:02.623666  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623694  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:02.623738  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m02 san=[127.0.0.1 192.168.49.3 ha-858120-m02 localhost minikube]
	I1017 20:25:02.747705  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:02.747782  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:02.747828  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.766757  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:02.880520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:02.880580  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:02.906371  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:02.906496  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:02.945019  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:02.945087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:25:02.987301  633180 provision.go:87] duration metric: took 393.559503ms to configureAuth
	I1017 20:25:02.987344  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:02.987585  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:02.987711  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.018499  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:03.018813  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:03.018831  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:03.435808  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:03.435834  633180 machine.go:96] duration metric: took 4.503969223s to provisionDockerMachine
	I1017 20:25:03.435844  633180 start.go:293] postStartSetup for "ha-858120-m02" (driver="docker")
	I1017 20:25:03.435855  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:03.435916  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:03.435964  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.455906  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.562871  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:03.566432  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:03.566502  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:03.566518  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:03.566584  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:03.566666  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:03.566676  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:03.566778  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:03.574445  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:03.599633  633180 start.go:296] duration metric: took 163.773711ms for postStartSetup
	I1017 20:25:03.599729  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:03.599785  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.627245  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.741852  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:03.758675  633180 fix.go:56] duration metric: took 5.354138506s for fixHost
	I1017 20:25:03.758698  633180 start.go:83] releasing machines lock for "ha-858120-m02", held for 5.354185538s
	I1017 20:25:03.758773  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:03.786714  633180 out.go:179] * Found network options:
	I1017 20:25:03.789819  633180 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 20:25:03.793065  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:03.793118  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:03.793187  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:03.793246  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.793459  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:03.793525  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.843024  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.846827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:04.116601  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:04.182522  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:04.182658  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:04.199347  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:04.199411  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:04.199459  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:04.199536  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:04.224421  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:04.246523  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:04.246695  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:04.274907  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:04.293080  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:04.507388  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:04.744373  633180 docker.go:234] disabling docker service ...
	I1017 20:25:04.744489  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:04.763912  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:04.778471  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:04.999181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:05.212501  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:05.227293  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:05.243392  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:05.243504  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.253121  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:05.253268  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.262917  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.272790  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.282153  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:05.291008  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.300670  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.310655  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.320320  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:05.328861  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:05.337217  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:05.542704  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:25:05.766295  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:25:05.766406  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:25:05.770528  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:25:05.770594  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:25:05.774319  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:25:05.802224  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:25:05.802316  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.832543  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.868559  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:25:05.871619  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:25:05.874677  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:25:05.891324  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:25:05.895481  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:05.906398  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:25:05.906643  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:05.906915  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:25:05.924891  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:25:05.925180  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.3
	I1017 20:25:05.925188  633180 certs.go:195] generating shared ca certs ...
	I1017 20:25:05.925202  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:25:05.925333  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:25:05.925371  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:25:05.925378  633180 certs.go:257] generating profile certs ...
	I1017 20:25:05.925461  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:25:05.925516  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.75ce5734
	I1017 20:25:05.925554  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:25:05.925562  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:25:05.925574  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:25:05.925587  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:25:05.925602  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:25:05.925612  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:25:05.925624  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:25:05.925635  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:25:05.925645  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:25:05.925695  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:25:05.925722  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:25:05.925731  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:25:05.925756  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:25:05.925779  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:25:05.925801  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:25:05.925843  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:05.925869  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:05.925885  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:25:05.925895  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:25:05.925947  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:25:05.942775  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:25:06.039567  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:25:06.043552  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:25:06.051886  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:25:06.055650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:25:06.071273  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:25:06.074980  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:25:06.084033  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:25:06.087747  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:25:06.095897  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:25:06.099650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:25:06.109034  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:25:06.112875  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:25:06.121486  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:25:06.140459  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:25:06.159242  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:25:06.177880  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:25:06.196379  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:25:06.214366  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:25:06.232392  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:25:06.250082  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:25:06.268477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:25:06.287023  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:25:06.306305  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:25:06.325727  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:25:06.339132  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:25:06.351861  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:25:06.364957  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:25:06.378148  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:25:06.391750  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:25:06.405157  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:25:06.418865  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:25:06.425313  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:25:06.433695  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437626  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437740  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.479551  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:25:06.487333  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:25:06.495467  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.498961  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.499069  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.541081  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:25:06.549258  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:25:06.557861  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.561976  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.562057  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.604418  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:25:06.612470  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:25:06.616274  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:25:06.657319  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:25:06.701813  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:25:06.745127  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:25:06.787373  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:25:06.830322  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:25:06.871900  633180 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 20:25:06.872035  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:25:06.872065  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:25:06.872127  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:25:06.885270  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:25:06.885337  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1017 20:25:06.885400  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:25:06.893245  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:25:06.893321  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:25:06.901109  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:25:06.914333  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:25:06.927147  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:25:06.941387  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:25:06.945076  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:06.954881  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.078941  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.093624  633180 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:25:07.094028  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:07.097836  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:25:07.100837  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.224505  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.238770  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:25:07.238907  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:25:07.239230  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m02" to be "Ready" ...
	W1017 20:25:17.242440  633180 node_ready.go:55] error getting node "ha-858120-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02": net/http: TLS handshake timeout
	I1017 20:25:20.808419  633180 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02"
	I1017 20:25:27.596126  633180 node_ready.go:49] node "ha-858120-m02" is "Ready"
	I1017 20:25:27.596154  633180 node_ready.go:38] duration metric: took 20.356898962s for node "ha-858120-m02" to be "Ready" ...
	I1017 20:25:27.596166  633180 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:25:27.596229  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.096580  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.597221  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.097036  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.596474  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.096742  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.596355  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.621450  633180 api_server.go:72] duration metric: took 23.527778082s to wait for apiserver process to appear ...
	I1017 20:25:30.621472  633180 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:25:30.621491  633180 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 20:25:30.643810  633180 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 20:25:30.645148  633180 api_server.go:141] control plane version: v1.34.1
	I1017 20:25:30.645172  633180 api_server.go:131] duration metric: took 23.693241ms to wait for apiserver health ...
	I1017 20:25:30.645181  633180 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:25:30.668363  633180 system_pods.go:59] 26 kube-system pods found
	I1017 20:25:30.668458  633180 system_pods.go:61] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668491  633180 system_pods.go:61] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668536  633180 system_pods.go:61] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.668570  633180 system_pods.go:61] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.668614  633180 system_pods.go:61] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.668638  633180 system_pods.go:61] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.668658  633180 system_pods.go:61] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.668690  633180 system_pods.go:61] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.668714  633180 system_pods.go:61] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.668741  633180 system_pods.go:61] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.668778  633180 system_pods.go:61] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.668811  633180 system_pods.go:61] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.668837  633180 system_pods.go:61] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.668879  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.668901  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.668934  633180 system_pods.go:61] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.668958  633180 system_pods.go:61] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.668978  633180 system_pods.go:61] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.669017  633180 system_pods.go:61] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.669041  633180 system_pods.go:61] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.669067  633180 system_pods.go:61] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.669101  633180 system_pods.go:61] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.669127  633180 system_pods.go:61] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.669157  633180 system_pods.go:61] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.669188  633180 system_pods.go:61] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.669214  633180 system_pods.go:61] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.669236  633180 system_pods.go:74] duration metric: took 24.048955ms to wait for pod list to return data ...
	I1017 20:25:30.669273  633180 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:25:30.687415  633180 default_sa.go:45] found service account: "default"
	I1017 20:25:30.687489  633180 default_sa.go:55] duration metric: took 18.191795ms for default service account to be created ...
	I1017 20:25:30.687514  633180 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:25:30.762042  633180 system_pods.go:86] 26 kube-system pods found
	I1017 20:25:30.762148  633180 system_pods.go:89] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762181  633180 system_pods.go:89] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762224  633180 system_pods.go:89] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.762250  633180 system_pods.go:89] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.762273  633180 system_pods.go:89] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.762307  633180 system_pods.go:89] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.762330  633180 system_pods.go:89] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.762352  633180 system_pods.go:89] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.762387  633180 system_pods.go:89] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.762413  633180 system_pods.go:89] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.762435  633180 system_pods.go:89] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.762469  633180 system_pods.go:89] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.762497  633180 system_pods.go:89] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.762517  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.762554  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.762578  633180 system_pods.go:89] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.762599  633180 system_pods.go:89] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.762635  633180 system_pods.go:89] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.762662  633180 system_pods.go:89] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.762684  633180 system_pods.go:89] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.762717  633180 system_pods.go:89] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.762741  633180 system_pods.go:89] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.762760  633180 system_pods.go:89] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.762794  633180 system_pods.go:89] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.762816  633180 system_pods.go:89] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.762834  633180 system_pods.go:89] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.762855  633180 system_pods.go:126] duration metric: took 75.322066ms to wait for k8s-apps to be running ...
	I1017 20:25:30.762895  633180 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:25:30.762983  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:25:30.798874  633180 system_svc.go:56] duration metric: took 35.957427ms WaitForService to wait for kubelet
	I1017 20:25:30.798951  633180 kubeadm.go:586] duration metric: took 23.705274367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:25:30.798985  633180 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:25:30.805472  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805553  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805580  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805600  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805635  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805661  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805684  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805722  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805746  633180 node_conditions.go:105] duration metric: took 6.741948ms to run NodePressure ...
	I1017 20:25:30.805773  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:25:30.805824  633180 start.go:255] writing updated cluster config ...
	I1017 20:25:30.809328  633180 out.go:203] 
	I1017 20:25:30.812477  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:30.812660  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.816059  633180 out.go:179] * Starting "ha-858120-m03" control-plane node in "ha-858120" cluster
	I1017 20:25:30.819758  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:25:30.822780  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:25:30.825590  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:25:30.825654  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:25:30.825902  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:25:30.826027  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:25:30.826092  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:25:30.826241  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.865897  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:25:30.865917  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:25:30.865932  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:25:30.865956  633180 start.go:360] acquireMachinesLock for ha-858120-m03: {Name:mk0745e738c38fcaad2c00b3d5938ec5b18bc19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:25:30.866008  633180 start.go:364] duration metric: took 36.481µs to acquireMachinesLock for "ha-858120-m03"
	I1017 20:25:30.866027  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:25:30.866033  633180 fix.go:54] fixHost starting: m03
	I1017 20:25:30.866284  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:30.902472  633180 fix.go:112] recreateIfNeeded on ha-858120-m03: state=Stopped err=<nil>
	W1017 20:25:30.902498  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:25:30.906012  633180 out.go:252] * Restarting existing docker container for "ha-858120-m03" ...
	I1017 20:25:30.906100  633180 cli_runner.go:164] Run: docker start ha-858120-m03
	I1017 20:25:31.385666  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:31.416798  633180 kic.go:430] container "ha-858120-m03" state is running.
	I1017 20:25:31.417186  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:31.445988  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:31.446246  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:25:31.446327  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:31.476234  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:31.476543  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:31.476558  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:25:31.477171  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:25:34.759062  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:34.759091  633180 ubuntu.go:182] provisioning hostname "ha-858120-m03"
	I1017 20:25:34.759181  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:34.785061  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:34.785366  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:34.785384  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m03 && echo "ha-858120-m03" | sudo tee /etc/hostname
	I1017 20:25:35.026879  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:35.027037  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.055472  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:35.055775  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:35.055791  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:35.277230  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:35.277256  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:35.277273  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:35.277283  633180 provision.go:84] configureAuth start
	I1017 20:25:35.277348  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:35.311355  633180 provision.go:143] copyHostCerts
	I1017 20:25:35.311397  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311430  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:35.311438  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311519  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:35.311605  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311621  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:35.311626  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311652  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:35.311691  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311709  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:35.311713  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311737  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:35.311782  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m03 san=[127.0.0.1 192.168.49.4 ha-858120-m03 localhost minikube]
	I1017 20:25:35.867211  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:35.867305  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:35.867370  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.885861  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:36.014744  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:36.014818  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:36.078628  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:36.078695  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:36.159581  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:36.159683  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:25:36.221533  633180 provision.go:87] duration metric: took 944.235432ms to configureAuth
	I1017 20:25:36.221570  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:36.221864  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:36.222030  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:36.252315  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:36.252618  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:36.252633  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:37.901354  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:37.901379  633180 machine.go:96] duration metric: took 6.455113021s to provisionDockerMachine
	I1017 20:25:37.901397  633180 start.go:293] postStartSetup for "ha-858120-m03" (driver="docker")
	I1017 20:25:37.901423  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:37.901507  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:37.901580  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:37.931348  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.063033  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:38.067834  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:38.067869  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:38.067882  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:38.067943  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:38.068028  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:38.068035  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:38.068144  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:38.080413  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:38.111750  633180 start.go:296] duration metric: took 210.321276ms for postStartSetup
	I1017 20:25:38.111848  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:38.111903  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.139479  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.252206  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:38.257552  633180 fix.go:56] duration metric: took 7.391512723s for fixHost
	I1017 20:25:38.257574  633180 start.go:83] releasing machines lock for "ha-858120-m03", held for 7.39155818s
	I1017 20:25:38.257643  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:38.279335  633180 out.go:179] * Found network options:
	I1017 20:25:38.282289  633180 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 20:25:38.285193  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285225  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285250  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285261  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:38.285342  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:38.285383  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.285405  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:38.285456  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.309400  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.319419  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.495206  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:38.620333  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:38.620409  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:38.635710  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:38.635735  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:38.635766  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:38.635815  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:38.658258  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:38.677709  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:38.677780  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:38.695381  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:38.718728  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:38.983870  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:39.232982  633180 docker.go:234] disabling docker service ...
	I1017 20:25:39.233056  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:39.251900  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:39.268736  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:39.513181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:39.774360  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:39.795448  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:39.819737  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:39.819803  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.835507  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:39.835578  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.848330  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.863809  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.873655  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:39.886248  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.899031  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.910745  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.923167  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:39.945269  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:39.956015  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:40.185598  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:27:10.577185  633180 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.391479788s)
	I1017 20:27:10.577210  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:27:10.577270  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:27:10.581599  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:27:10.581663  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:27:10.586217  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:27:10.618110  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:27:10.618197  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.657726  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.690017  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:27:10.692996  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:27:10.695853  633180 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 20:27:10.698743  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:27:10.717568  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:27:10.721686  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:10.732598  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:27:10.732855  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:10.733110  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:27:10.755756  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:27:10.756043  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.4
	I1017 20:27:10.756057  633180 certs.go:195] generating shared ca certs ...
	I1017 20:27:10.756073  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:27:10.756206  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:27:10.756249  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:27:10.756259  633180 certs.go:257] generating profile certs ...
	I1017 20:27:10.756334  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:27:10.756400  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.daaf2b71
	I1017 20:27:10.756443  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:27:10.756456  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:27:10.756468  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:27:10.756484  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:27:10.756494  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:27:10.756505  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:27:10.756520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:27:10.756531  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:27:10.756545  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:27:10.756595  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:27:10.756627  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:27:10.756639  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:27:10.756664  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:27:10.756689  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:27:10.756714  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:27:10.756760  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:27:10.756791  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:10.756807  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:27:10.756818  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:27:10.756875  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:27:10.776286  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:27:10.875440  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:27:10.879271  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:27:10.887346  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:27:10.890991  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:27:10.899445  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:27:10.902677  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:27:10.910747  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:27:10.914609  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:27:10.923275  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:27:10.927331  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:27:10.937614  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:27:10.941051  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:27:10.949375  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:27:10.970388  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:27:10.989978  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:27:11.024313  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:27:11.045252  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:27:11.067969  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:27:11.093977  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:27:11.116400  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:27:11.143991  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:27:11.165234  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:27:11.186154  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:27:11.204999  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:27:11.217584  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:27:11.231184  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:27:11.245544  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:27:11.258825  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:27:11.273380  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:27:11.288154  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:27:11.301714  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:27:11.307871  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:27:11.316139  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320071  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320164  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.360582  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:27:11.368911  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:27:11.386044  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389821  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389916  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.431364  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:27:11.439391  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:27:11.448231  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452172  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452235  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.493408  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:27:11.501304  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:27:11.505093  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:27:11.546404  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:27:11.588587  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:27:11.629385  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:27:11.670643  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:27:11.711584  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:27:11.752896  633180 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 20:27:11.752991  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:27:11.753019  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:27:11.753080  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:27:11.765738  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:27:11.765801  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
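	The manifest above is emitted only after the `lsmod | grep ip_vs` probe fails (see the "giving up enabling control-plane load-balancing" line a few entries earlier), so kube-vip is configured in ARP mode instead. Below is a minimal, hypothetical Go sketch of such a probe, included only to illustrate the decision recorded in the log; it is not minikube's actual implementation.

	// probe_ipvs.go: a minimal, hypothetical sketch of the "lsmod | grep ip_vs"
	// probe recorded in the log above; not minikube's actual implementation.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ipvsAvailable reports whether the ip_vs kernel module appears in lsmod output.
	// The log uses the same kind of check to decide whether control-plane
	// load-balancing can be enabled before falling back to the ARP-mode kube-vip manifest.
	func ipvsAvailable() (bool, error) {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			return false, fmt.Errorf("running lsmod: %w", err)
		}
		return strings.Contains(string(out), "ip_vs"), nil
	}

	func main() {
		ok, err := ipvsAvailable()
		if err != nil {
			fmt.Println("could not probe for ip_vs:", err)
			return
		}
		if ok {
			fmt.Println("ip_vs present: control-plane load-balancing possible")
		} else {
			fmt.Println("ip_vs absent: fall back to ARP-mode kube-vip")
		}
	}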
	I1017 20:27:11.765864  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:27:11.773834  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:27:11.773902  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:27:11.782020  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:27:11.794989  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:27:11.809996  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:27:11.825247  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:27:11.828873  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:11.838796  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:11.986822  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.004552  633180 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:27:12.005009  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:12.009913  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:27:12.012573  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:12.166504  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.181240  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:27:12.181372  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:27:12.181669  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m03" to be "Ready" ...
	W1017 20:27:14.185938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:16.186949  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:18.685673  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:20.686393  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:23.185742  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:25.186041  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:27.686171  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:30.186140  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:32.685938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:34.686362  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:37.189099  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:39.685178  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:41.685898  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:43.686246  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:46.185981  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:48.186022  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:50.685565  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:53.185024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:55.185063  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:57.186756  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:59.685967  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:02.185450  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:04.685930  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:07.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:09.185945  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:11.685298  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:13.685825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:16.186173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:18.685675  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:21.185822  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:23.686024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:25.686653  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:27.688976  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:30.185995  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:32.685998  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:34.686062  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:37.185512  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:39.684946  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:41.685173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:43.685392  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:45.686411  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:48.185559  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:50.685010  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:52.685699  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:54.685799  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:57.185287  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:59.185541  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:01.186445  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:03.685663  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:05.686118  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:08.185421  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:10.185464  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:12.685166  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:14.685776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:16.686147  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:18.686284  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:21.185551  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:23.685297  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:26.185709  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:28.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:30.186229  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:32.685640  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:34.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:36.685906  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:39.185156  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:41.185196  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:43.185432  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:45.189065  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:47.685980  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:50.185249  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:52.186422  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:54.685912  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:57.185530  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:59.185859  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:01.187381  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:03.685399  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:05.685481  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:08.187943  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:10.689106  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:13.185786  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:15.685607  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:17.686048  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:19.686753  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:22.185049  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:24.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:26.685608  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:28.686143  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:31.185273  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:33.186568  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:35.685304  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:37.685459  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:39.685964  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:42.186035  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:44.186982  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:46.685781  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:49.185082  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:51.185419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:53.686212  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:56.185582  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:58.185659  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:00.222492  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:02.685725  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:04.686504  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:07.186161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:09.685238  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:11.685865  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:14.185500  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:16.185620  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:18.192262  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:20.686051  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:23.185373  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:25.686121  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:28.187578  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:30.689269  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:33.185825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:35.686100  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:38.186012  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:40.685515  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:42.685703  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:44.685871  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:47.185764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:49.685433  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:51.685733  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:54.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:56.685619  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:59.185113  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:01.185211  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:03.185561  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:05.186288  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:07.685440  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:09.685758  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:12.185776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:14.185887  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:16.685436  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:19.185337  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:21.686419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:24.186002  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:26.686017  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:29.185789  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:31.686359  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:34.185117  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:36.185746  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:38.185848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:40.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:43.185924  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:45.186873  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:47.685424  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:49.685760  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:52.185842  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:54.685648  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:57.185264  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:59.185532  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:01.186342  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:03.685323  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:05.685848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:07.686600  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:10.185305  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	I1017 20:33:12.181809  633180 node_ready.go:38] duration metric: took 6m0.000088857s for node "ha-858120-m03" to be "Ready" ...
	I1017 20:33:12.184950  633180 out.go:203] 
	W1017 20:33:12.187811  633180 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1017 20:33:12.187835  633180 out.go:285] * 
	W1017 20:33:12.189989  633180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:33:12.193072  633180 out.go:203] 
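	
	The wait loop above gives up after 6m0s because node "ha-858120-m03" never reports "Ready":"True". As a hedged aside (not part of the captured output), the condition that node_ready.go keeps polling can be read directly with kubectl; the sketch below assumes kubectl is pointed at the ha-858120 cluster:
	
	  # Print the Ready condition the retry loop above is waiting on
	  kubectl get node ha-858120-m03 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	  # Or block for the same 6m window the test used
	  kubectl wait --for=condition=Ready node/ha-858120-m03 --timeout=6m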
	
	
	==> CRI-O <==
	Oct 17 20:25:28 ha-858120 crio[661]: time="2025-10-17T20:25:28.449562303Z" level=info msg="Started container" PID=1184 containerID=dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd description=kube-system/coredns-66bc5c9577-hc5rq/coredns id=3b68bae1-e38f-42c1-bdab-f61b3987b2a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1860023794c840fe5be850bb22c178acfad4e2cba7c02a3af6ce14acb4379be7
	Oct 17 20:25:59 ha-858120 conmon[1152]: conmon e299f9f677259417858b <ninfo>: container 1163 exited with status 1
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.07219365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72b412f9-766c-4334-a938-00c3ec219964 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.076148095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=30596c8c-f29a-4d19-9d8b-b08ba7b6cf56 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082176481Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082434036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.09632455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096698857Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/passwd: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096725565Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/group: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.097064885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.125738481Z" level=info msg="Created container 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.129459847Z" level=info msg="Starting container: 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008" id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.13991512Z" level=info msg="Started container" PID=1399 containerID=5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008 description=kube-system/storage-provisioner/storage-provisioner id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.628460002Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632408867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.63244616Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632468453Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645565094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645599006Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645616032Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650549978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650585654Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650619608Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.654017064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.65405073Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b5162cc662da       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       3                   b8cc01892712d       storage-provisioner                 kube-system
	dc932b06eb666       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1860023794c84       coredns-66bc5c9577-hc5rq            kube-system
	f99357006a077       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   24158395efe09       coredns-66bc5c9577-zfbms            kube-system
	30fbb87d1faca       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   916eeadf90187       kube-proxy-5qtb8                    kube-system
	53ef170773eb6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d544ca125ccd8       busybox-7b57f96db7-jw7vx            default
	e299f9f677259       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       2                   b8cc01892712d       storage-provisioner                 kube-system
	97aba0e5d7c48       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   4485a8e917cbe       kindnet-7bwxv                       kube-system
	9ce296c3989a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	638256daf481d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            2                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	ee8a159707f90       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dcbb9d5285b37       kube-vip-ha-858120                  kube-system
	62a0a9e565cbd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	09cba02ad2598       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   fce4c8d39b2df       etcd-ha-858120                      kube-system
	56f597b80ce9d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	7965630635b8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   8e060a6690898       kube-scheduler-ha-858120            kube-system
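	
	A hedged note on the table above: it is the CRI-O view from the primary node, so the same listing can be reproduced outside the test run by shelling into that node and querying the runtime directly (profile name taken from the logs above):
	
	  minikube -p ha-858120 ssh -- sudo crictl ps -a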
	
	
	==> coredns [dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60334 - 25267 "HINFO IN 5061499944827162834.2776303602288628219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03310744s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [f99357006a077698a85f223986d69f2d7d83e5bce90c1c2cc8ec2f393e14a413] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47468 - 22569 "HINFO IN 1283965037511611162.4618766947171906600. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039278336s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
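	
	Both coredns replicas above log "dial tcp 10.96.0.1:443: i/o timeout", meaning they could not reach the kubernetes Service ClusterIP while the apiserver was restarting. A minimal, hedged sketch of checking that path by hand (the busybox image tag and probe pod name are assumptions, not anything the test creates):
	
	  # Confirm the ClusterIP and its backing endpoints
	  kubectl get svc kubernetes
	  kubectl get endpoints kubernetes
	  # TCP probe of the Service VIP from a throwaway pod
	  kubectl run svc-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	    nc -zv -w 3 10.96.0.1 443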
	
	
	==> describe nodes <==
	Name:               ha-858120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_18_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:18:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-858120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                8074ca1f-e50b-46a3-ae2a-18fe40cb596a
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jw7vx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hc5rq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-zfbms             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-858120                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7bwxv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-858120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-858120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5qtb8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-858120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-858120                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m53s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-858120 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m26s (x8 over 8m26s)  kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m26s (x8 over 8m26s)  kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m26s (x8 over 8m26s)  kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	
	
	Name:               ha-858120-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-858120-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                86212adb-5900-4e82-861f-965be14c377b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8kb7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-858120-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-n44c4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-858120-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-858120-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wzlp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-858120-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-858120-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m56s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m34s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 9m43s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m43s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m42s (x9 over 9m43s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m42s (x8 over 9m43s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m42s (x7 over 9m43s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9m10s                  node-controller  Node ha-858120-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           9m5s                   node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 8m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m22s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m22s (x8 over 8m22s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m22s (x8 over 8m22s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m22s (x8 over 8m22s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m48s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           7m21s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	
	
	Name:               ha-858120-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_22_19_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:22:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:24:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-858120-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                78570753-9906-4f75-b3e5-06c23a58a2cc
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jl4tq       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-cn926    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-858120-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m5s               node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m48s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m21s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeNotReady             6m58s              node-controller  Node ha-858120-m04 status is now: NodeNotReady
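	
	In the describe output above, "ha-858120-m03" no longer appears and "ha-858120-m04" carries node.kubernetes.io/unreachable taints with all conditions Unknown because its kubelet stopped posting status. A hedged sketch for seeing the same state at a glance (assumes kubectl targets this cluster):
	
	  # Node summary plus the taints on the NotReady worker
	  kubectl get nodes -o wide
	  kubectl get node ha-858120-m04 \
	    -o jsonpath='{range .spec.taints[*]}{.key}:{.effect}{"\n"}{end}'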
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:18] overlayfs: idmapped layers are currently not supported
	[Oct17 20:19] overlayfs: idmapped layers are currently not supported
	[Oct17 20:20] overlayfs: idmapped layers are currently not supported
	[Oct17 20:22] overlayfs: idmapped layers are currently not supported
	[Oct17 20:23] overlayfs: idmapped layers are currently not supported
	[Oct17 20:24] overlayfs: idmapped layers are currently not supported
	[Oct17 20:25] overlayfs: idmapped layers are currently not supported
	[ +32.795830] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b] <==
	{"level":"warn","ts":"2025-10-17T20:33:02.934965Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236719Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236770Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939190Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939286Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238618Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238697Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240272Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240343Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940296Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940377Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:16.105926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:33:16.112935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:33:16.133256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39490","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:33:16.180694Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7668597845192143369 12593026477526642892)"}
	{"level":"info","ts":"2025-10-17T20:33:16.182674Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"c4547612b813713e","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-17T20:33:16.182729Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182771Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182804Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182869Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182891Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182907Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182945Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182957Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182979Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"c4547612b813713e"}
	
	
	==> kernel <==
	 20:33:22 up  3:15,  0 user,  load average: 0.43, 0.79, 1.30
	Linux ha-858120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97aba0e5d7c482a104be9a87cd7b78aec663a93d84c72a85316a204d1548cc16] <==
	I1017 20:32:48.629038       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:32:48.629095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:48.629107       1 main.go:301] handling current node
	I1017 20:32:58.632651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:58.632696       1 main.go:301] handling current node
	I1017 20:32:58.632712       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:32:58.632718       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:32:58.632922       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:32:58.632931       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:32:58.633040       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:32:58.633048       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.624264       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:33:08.624367       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:33:08.624604       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:33:08.624662       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:33:08.624908       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:33:08.624952       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.625258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:33:08.625305       1 main.go:301] handling current node
	I1017 20:33:18.624829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:33:18.624863       1 main.go:301] handling current node
	I1017 20:33:18.624879       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:33:18.624884       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:33:18.625048       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:33:18.625107       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1] <==
	I1017 20:24:57.578002       1 server.go:150] Version: v1.34.1
	I1017 20:24:57.578115       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1017 20:24:59.609875       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1017 20:24:59.609983       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1017 20:24:59.610018       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1017 20:24:59.610051       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1017 20:24:59.610078       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1017 20:24:59.610108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1017 20:24:59.610137       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1017 20:24:59.610165       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1017 20:24:59.610193       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1017 20:24:59.610224       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1017 20:24:59.610253       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1017 20:24:59.610280       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1017 20:24:59.718012       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 20:24:59.731289       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1017 20:24:59.735252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1017 20:24:59.771928       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:24:59.798433       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1017 20:24:59.798556       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1017 20:24:59.798825       1 instance.go:239] Using reconciler: lease
	W1017 20:24:59.801239       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1017 20:25:19.800493       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [638256daf481df23c6dc0c5f0e0206e9031fe11c02f69b76b36adebb4f77751b] <==
	I1017 20:25:27.747267       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:25:27.748048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:25:27.748245       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:25:27.748283       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:25:27.755722       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:25:27.756801       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:25:27.756825       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:25:27.756833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:25:27.756839       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:25:27.757101       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:25:27.766983       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:25:27.769225       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:25:27.769488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:25:27.769529       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:25:27.804915       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1017 20:25:27.812648       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 20:25:27.814174       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:25:27.860617       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 20:25:27.881031       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 20:25:27.934437       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:25:28.456701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1017 20:25:29.034012       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 20:25:34.663674       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:25:34.733687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:25:47.528926       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589] <==
	I1017 20:24:58.662485       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:25:01.671944       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 20:25:01.678227       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:01.684843       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 20:25:01.685107       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 20:25:01.685542       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 20:25:01.685556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.598084       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [9ce296c3989a1de13e23cf6043950e41ef86d2754f0427491575c19984a6d824] <==
	I1017 20:25:34.378051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:25:34.378105       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:25:34.378132       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:25:34.378160       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:25:34.387349       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:25:34.387499       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-858120-m04"
	I1017 20:25:34.392049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:25:34.392153       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:25:34.403270       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:25:34.410392       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:25:34.420898       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:25:34.428383       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:25:34.429955       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:25:34.452565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.452639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:25:34.461211       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:25:34.461569       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:25:34.467835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:25:34.467866       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:25:34.467873       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:25:34.491229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.517732       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:25:34.559435       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:31:26.912659       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-8llg5"
	E1017 20:33:16.861945       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-858120-m03\", UID:\"64932945-e010-4d53-bed9-a3728eabcfbb\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-858120-m03\", UID:\"f4d35634-a9f7-49a6-89bb-703ca753c231\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-858120-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [30fbb87d1faca7dfc4d9f64b418999dbb75c40979544bddc3ad099cb9ad1a052] <==
	I1017 20:25:29.123150       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:25:29.213307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:25:29.325124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:25:29.325227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 20:25:29.325371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:25:29.346595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:25:29.346714       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:25:29.351735       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:25:29.352127       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:25:29.352304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:29.353581       1 config.go:200] "Starting service config controller"
	I1017 20:25:29.353705       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:25:29.353767       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:25:29.353798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:25:29.353833       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:25:29.353860       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:25:29.354594       1 config.go:309] "Starting node config controller"
	I1017 20:25:29.357072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:25:29.357126       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:25:29.454278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:25:29.454382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:25:29.454408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6] <==
	I1017 20:25:27.558327       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:27.562717       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.562823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.563148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:25:27.563232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.643472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:25:27.643574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:25:27.643643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:25:27.643704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:25:27.643780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:25:27.643853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:25:27.643897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:25:27.643934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:25:27.643978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:25:27.644027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:25:27.644071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:25:27.644291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:25:27.644343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:25:27.644389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:25:27.644437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:25:27.644476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:25:27.644569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:25:27.644592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:25:27.685189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 20:25:29.163171       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891343     797 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891630     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.901895     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-cni-cfg\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902145     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-xtables-lock\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902349     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-xtables-lock\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902474     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-lib-modules\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902634     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-lib-modules\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902760     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9e9dfd7-e90a-4da3-969d-2669daa3d123-tmp\") pod \"storage-provisioner\" (UID: \"f9e9dfd7-e90a-4da3-969d-2669daa3d123\") " pod="kube-system/storage-provisioner"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.923822     797 scope.go:117] "RemoveContainer" containerID="56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	Oct 17 20:25:27 ha-858120 kubelet[797]: E1017 20:25:27.935263     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-858120\" already exists" pod="kube-system/etcd-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.935307     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.974898     797 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.006913     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-858120\" already exists" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.007149     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.035243     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-858120\" already exists" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.083154     797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-858120" podStartSLOduration=1.083097255 podStartE2EDuration="1.083097255s" podCreationTimestamp="2025-10-17 20:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:25:28.06330839 +0000 UTC m=+31.423690073" watchObservedRunningTime="2025-10-17 20:25:28.083097255 +0000 UTC m=+31.443478937"
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.161072     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d WatchSource:0}: Error finding container b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d: Status 404 returned error can't find the container with id b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.182019     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b WatchSource:0}: Error finding container d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b: Status 404 returned error can't find the container with id d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.276364     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f WatchSource:0}: Error finding container 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f: Status 404 returned error can't find the container with id 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.819931     797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8957fe84f5b782b1a91a47b00072c3" path="/var/lib/kubelet/pods/fc8957fe84f5b782b1a91a47b00072c3/volumes"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.778662     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.778732     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.779364     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.779405     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist"
	Oct 17 20:25:59 ha-858120 kubelet[797]: I1017 20:25:59.061021     797 scope.go:117] "RemoveContainer" containerID="e299f9f677259417858bfdf991397b3ef57a6485f2baf285eaece413087c058b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-858120 -n ha-858120
helpers_test.go:269: (dbg) Run:  kubectl --context ha-858120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-twgcq
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq
helpers_test.go:290: (dbg) kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-twgcq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b2h7l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b2h7l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  116s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  116s  default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  8s    default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (8.84s)
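A minimal triage sketch for the FailedScheduling events above (the context and node names are taken from this log; the deployment name busybox is only inferred from the ReplicaSet busybox-7b57f96db7 and may differ):

	kubectl --context ha-858120 describe node ha-858120-m04 | grep -A2 -i taints
	kubectl --context ha-858120 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'

The first command surfaces the node.kubernetes.io/unreachable taint the scheduler reports as untolerated; the second shows the pod anti-affinity term that rules out the remaining ready control-plane nodes.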

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-858120" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-858120\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-858120\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.1\",\"ClusterName\":\"ha-858120\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.1\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\
"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"Sta
ticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-858120
helpers_test.go:243: (dbg) docker inspect ha-858120:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	        "Created": "2025-10-17T20:18:20.77215583Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 633308,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:24:49.89013736Z",
	            "FinishedAt": "2025-10-17T20:24:49.310249081Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hostname",
	        "HostsPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/hosts",
	        "LogPath": "/var/lib/docker/containers/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196-json.log",
	        "Name": "/ha-858120",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-858120:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-858120",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196",
	                "LowerDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3df9b10cbc2e86a3b90d74c274fde9fc64c57cfdbc3a3c90d17d1d24d4ec86b6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-858120",
	                "Source": "/var/lib/docker/volumes/ha-858120/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-858120",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-858120",
	                "name.minikube.sigs.k8s.io": "ha-858120",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30368a165299690b2c1e64ba7fbd000063595e2b8330a6a0386fe8ae84472e14",
	            "SandboxKey": "/var/run/docker/netns/30368a165299",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33552"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33553"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33556"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33554"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33555"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-858120": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:7a:f4:71:ea:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a78c784685bd8e7296863536d4a6677a78ffb6c83e55d8ef3ae48685090ce7d1",
	                    "EndpointID": "2f35a70319780f77f7bb419c5c8b2a8ea449f45b75f1d2c0d0564b394c3bec61",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-858120",
	                        "0886947eb334"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
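A minimal sketch for pulling just the published port mappings out of an inspect dump like the one above (assuming jq is available; the container name comes from the report):

	docker inspect ha-858120 --format '{{json .NetworkSettings.Ports}}' | jq -r 'to_entries[] | "\(.key) -> \(.value[0].HostIp):\(.value[0].HostPort)"'

For the dump above this resolves 22, 2376, 5000, 8443 and 32443 to the 127.0.0.1:33552-33556 host ports shown in NetworkSettings.Ports.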
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-858120 -n ha-858120
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 logs -n 25: (1.535139229s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp testdata/cp-test.txt ha-858120-m04:/home/docker/cp-test.txt                                                             │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m04.txt │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m04_ha-858120.txt                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120.txt                                                 │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m02 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ cp      │ ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt               │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ ssh     │ ha-858120 ssh -n ha-858120-m03 sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt                                         │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:23 UTC │
	│ node    │ ha-858120 node start m02 --alsologtostderr -v 5                                                                                      │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:23 UTC │ 17 Oct 25 20:24 UTC │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ stop    │ ha-858120 stop --alsologtostderr -v 5                                                                                                │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │ 17 Oct 25 20:24 UTC │
	│ start   │ ha-858120 start --wait true --alsologtostderr -v 5                                                                                   │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:24 UTC │                     │
	│ node    │ ha-858120 node list --alsologtostderr -v 5                                                                                           │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:33 UTC │                     │
	│ node    │ ha-858120 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-858120 │ jenkins │ v1.37.0 │ 17 Oct 25 20:33 UTC │ 17 Oct 25 20:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:24:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:24:49.626381  633180 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:24:49.626517  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626528  633180 out.go:374] Setting ErrFile to fd 2...
	I1017 20:24:49.626533  633180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:24:49.626788  633180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:24:49.627220  633180 out.go:368] Setting JSON to false
	I1017 20:24:49.628041  633180 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11236,"bootTime":1760721454,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:24:49.628110  633180 start.go:141] virtualization:  
	I1017 20:24:49.633530  633180 out.go:179] * [ha-858120] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:24:49.636591  633180 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:24:49.636670  633180 notify.go:220] Checking for updates...
	I1017 20:24:49.642574  633180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:24:49.645486  633180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:49.648436  633180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:24:49.651294  633180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:24:49.654188  633180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:24:49.657632  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:49.657777  633180 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:24:49.688170  633180 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:24:49.688301  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.745303  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.735869738 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.745414  633180 docker.go:318] overlay module found
	I1017 20:24:49.748552  633180 out.go:179] * Using the docker driver based on existing profile
	I1017 20:24:49.751497  633180 start.go:305] selected driver: docker
	I1017 20:24:49.751513  633180 start.go:925] validating driver "docker" against &{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.751702  633180 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:24:49.751804  633180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:24:49.806673  633180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-10-17 20:24:49.798122578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:24:49.807082  633180 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:24:49.807153  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:49.807223  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:49.807278  633180 start.go:349] cluster config:
	{Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:24:49.810596  633180 out.go:179] * Starting "ha-858120" primary control-plane node in "ha-858120" cluster
	I1017 20:24:49.813288  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:49.816087  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:49.818802  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:49.818879  633180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:24:49.818889  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:49.818892  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:49.819084  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:49.819096  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:49.819258  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:49.838368  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:49.838387  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:49.838401  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:49.838423  633180 start.go:360] acquireMachinesLock for ha-858120: {Name:mk62278368bd1da921b0ccf6844a662f4fa595df Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:49.838475  633180 start.go:364] duration metric: took 34.511µs to acquireMachinesLock for "ha-858120"
	I1017 20:24:49.838494  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:49.838499  633180 fix.go:54] fixHost starting: 
	I1017 20:24:49.838762  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:49.855336  633180 fix.go:112] recreateIfNeeded on ha-858120: state=Stopped err=<nil>
	W1017 20:24:49.855369  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:49.858630  633180 out.go:252] * Restarting existing docker container for "ha-858120" ...
	I1017 20:24:49.858710  633180 cli_runner.go:164] Run: docker start ha-858120
	I1017 20:24:50.114094  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:24:50.136057  633180 kic.go:430] container "ha-858120" state is running.
	I1017 20:24:50.136454  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:50.160255  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:50.160500  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:50.160583  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:50.184023  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:50.184342  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:50.184352  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:50.185019  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40052->127.0.0.1:33552: read: connection reset by peer
	I1017 20:24:53.330671  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.330705  633180 ubuntu.go:182] provisioning hostname "ha-858120"
	I1017 20:24:53.330778  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.348402  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.348733  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.348751  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120 && echo "ha-858120" | sudo tee /etc/hostname
	I1017 20:24:53.508835  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120
	
	I1017 20:24:53.508970  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:53.526510  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:53.526830  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:53.526846  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120' | sudo tee -a /etc/hosts; 
				fi
			fi
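The shell block above is the idempotent hostname fix-up: replace an existing 127.0.1.1 entry if one is present, otherwise append one. As a rough illustration only (not minikube's implementation, which shells out over SSH as logged), the append branch can be expressed in Go; hosts.test and the hard-coded node name are hypothetical stand-ins for /etc/hosts and the real hostname:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry appends "127.0.1.1 <name>" unless some line already ends
// with the hostname, mirroring the grep check in the logged shell snippet.
// The sed-replace branch of the original is omitted for brevity.
func ensureHostsEntry(hostsPath, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).Match(data)
	if present {
		return nil
	}
	f, err := os.OpenFile(hostsPath, os.O_APPEND|os.O_WRONLY, 0o644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", name)
	return err
}

func main() {
	// Hypothetical local copy of /etc/hosts used for illustration.
	if err := ensureHostsEntry("hosts.test", "ha-858120"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}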
	I1017 20:24:53.671383  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:24:53.671409  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:24:53.671452  633180 ubuntu.go:190] setting up certificates
	I1017 20:24:53.671461  633180 provision.go:84] configureAuth start
	I1017 20:24:53.671530  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:53.689159  633180 provision.go:143] copyHostCerts
	I1017 20:24:53.689210  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689244  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:24:53.689256  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:24:53.689334  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:24:53.689461  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689496  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:24:53.689506  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:24:53.689536  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:24:53.689582  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689603  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:24:53.689611  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:24:53.689635  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:24:53.689684  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120 san=[127.0.0.1 192.168.49.2 ha-858120 localhost minikube]
	I1017 20:24:54.151535  633180 provision.go:177] copyRemoteCerts
	I1017 20:24:54.151620  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:24:54.151667  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.170207  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.274864  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:24:54.274925  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:24:54.292724  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:24:54.292785  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I1017 20:24:54.311391  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:24:54.311452  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:24:54.329407  633180 provision.go:87] duration metric: took 657.913595ms to configureAuth
	I1017 20:24:54.329435  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:24:54.329671  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:54.329775  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.347176  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:54.347484  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33552 <nil> <nil>}
	I1017 20:24:54.347504  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:24:54.678767  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:24:54.678791  633180 machine.go:96] duration metric: took 4.518274151s to provisionDockerMachine
	I1017 20:24:54.678802  633180 start.go:293] postStartSetup for "ha-858120" (driver="docker")
	I1017 20:24:54.678813  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:24:54.678876  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:24:54.678922  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.699409  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.802879  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:24:54.806060  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:24:54.806088  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:24:54.806100  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:24:54.806152  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:24:54.806232  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:24:54.806239  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:24:54.806342  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:24:54.813547  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:54.830587  633180 start.go:296] duration metric: took 151.77042ms for postStartSetup
	I1017 20:24:54.830688  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:24:54.830734  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.847827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:54.948374  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:24:54.953275  633180 fix.go:56] duration metric: took 5.114768478s for fixHost
	I1017 20:24:54.953301  633180 start.go:83] releasing machines lock for "ha-858120", held for 5.114818193s
	I1017 20:24:54.953368  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:24:54.969761  633180 ssh_runner.go:195] Run: cat /version.json
	I1017 20:24:54.969816  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.970081  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:24:54.970130  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:24:54.994236  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.003341  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:24:55.198024  633180 ssh_runner.go:195] Run: systemctl --version
	I1017 20:24:55.204628  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:24:55.242919  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:24:55.247648  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:24:55.247728  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:24:55.255380  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:24:55.255403  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:24:55.255433  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:24:55.255479  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:24:55.270476  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:24:55.283296  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:24:55.283382  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:24:55.298839  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:24:55.311724  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:24:55.424434  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:24:55.537289  633180 docker.go:234] disabling docker service ...
	I1017 20:24:55.537361  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:24:55.553026  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:24:55.566351  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:24:55.681250  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:24:55.798405  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:24:55.811378  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:24:55.825585  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:24:55.825661  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.834063  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:24:55.834172  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.843151  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.851611  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.860130  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:24:55.867797  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.876324  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.884581  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:24:55.892952  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:24:55.900323  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:24:55.907965  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.021101  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
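The sed calls above rewrite the cri-o drop-in (pause_image, cgroup_manager, conmon_cgroup, default_sysctls) before crio is restarted. A minimal Go sketch of the same rewrite-a-key pattern, assuming a local copy named 02-crio.conf; this is illustrative only, since minikube performs the edit with sed over SSH as logged:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey rewrites every `key = ...` line in a cri-o drop-in file, the
// same effect as `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'`.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// Hypothetical local copy of /etc/crio/crio.conf.d/02-crio.conf.
	conf := "02-crio.conf"
	for key, val := range map[string]string{
		"pause_image":    "registry.k8s.io/pause:3.10.1",
		"cgroup_manager": "cgroupfs",
	} {
		if err := setTOMLKey(conf, key, val); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}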
	I1017 20:24:56.158831  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:24:56.158928  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:24:56.162776  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:24:56.162859  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:24:56.166390  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:24:56.192830  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
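After restarting crio, the start code waits for the CRI socket before invoking crictl. A minimal sketch of such a wait loop (stdlib only; the 60s budget matches the log, everything else is illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket path exists or the deadline passes,
// roughly the "Will wait 60s for socket path" step in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}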
	I1017 20:24:56.192972  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.221409  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:24:56.254422  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:24:56.257178  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:24:56.271792  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:24:56.275653  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.285727  633180 kubeadm.go:883] updating cluster {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 20:24:56.285880  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:56.285942  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.320941  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.320965  633180 crio.go:433] Images already preloaded, skipping extraction
	I1017 20:24:56.321020  633180 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 20:24:56.345716  633180 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 20:24:56.345741  633180 cache_images.go:85] Images are preloaded, skipping loading
	I1017 20:24:56.345750  633180 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1017 20:24:56.345858  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:24:56.345940  633180 ssh_runner.go:195] Run: crio config
	I1017 20:24:56.409511  633180 cni.go:84] Creating CNI manager for ""
	I1017 20:24:56.409542  633180 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1017 20:24:56.409567  633180 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 20:24:56.409589  633180 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-858120 NodeName:ha-858120 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 20:24:56.410072  633180 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-858120"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
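The generated kubeadm config above is one multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A small sketch that splits such a file and lists the document kinds, e.g. as a quick sanity check; kubeadm.yaml here is a hypothetical local copy and the splitting is deliberately simple:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// Split a multi-document YAML stream on "---" separators and print each
// document's kind, confirming all four configuration kinds are present.
func main() {
	data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(string(data), "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i, m[1])
		}
	}
}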
	
	I1017 20:24:56.410096  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:24:56.410163  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:24:56.425787  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:56.425947  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
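Because the lsmod probe above found no ip_vs modules, the manifest runs kube-vip in ARP mode (vip_arp=true on eth0) for the 192.168.49.254 VIP instead of IPVS load-balancing. An equivalent of that probe that reads /proc/modules directly rather than shelling out to lsmod might look like this sketch:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded scans /proc/modules for the ip_vs module family, the same
// question answered by `sudo sh -c "lsmod | grep ip_vs"` in the log.
func ipvsLoaded() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.HasPrefix(sc.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := ipvsLoaded()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs loaded:", ok)
}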
	I1017 20:24:56.426028  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:24:56.433575  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:24:56.433642  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1017 20:24:56.441456  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1017 20:24:56.453796  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:24:56.466376  633180 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1017 20:24:56.480780  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:24:56.493351  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:24:56.497083  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:24:56.507006  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:24:56.614355  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:24:56.631138  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.2
	I1017 20:24:56.631170  633180 certs.go:195] generating shared ca certs ...
	I1017 20:24:56.631205  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:56.631352  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:24:56.631435  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:24:56.631448  633180 certs.go:257] generating profile certs ...
	I1017 20:24:56.631532  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:24:56.631567  633180 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f
	I1017 20:24:56.631581  633180 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1017 20:24:57.260314  633180 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f ...
	I1017 20:24:57.260390  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f: {Name:mk0eeb82ef1c3e333bd14f384361a665d81ea399 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260624  633180 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f ...
	I1017 20:24:57.260661  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f: {Name:mkd9170cb1ed384cce4c4204f35083d5972d0281 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:57.260803  633180 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt
	I1017 20:24:57.260987  633180 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.d0f30c0f -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key
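The apiserver serving certificate generated above carries IP SANs for the service IP, localhost, every control-plane node, and the HA VIP. A self-contained sketch of issuing a certificate with that SAN set via crypto/x509 (self-signed here for brevity, whereas minikube signs against its minikubeCA key pair):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// IP SANs taken from the log line above; DNS names are illustrative.
	sanIPs := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.49.2", "192.168.49.3", "192.168.49.4", "192.168.49.254"}
	var ips []net.IP
	for _, s := range sanIPs {
		ips = append(ips, net.ParseIP(s))
	}
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     []string{"ha-858120", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}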
	I1017 20:24:57.261179  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:24:57.261215  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:24:57.261249  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:24:57.261296  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:24:57.261335  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:24:57.261369  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:24:57.261415  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:24:57.261450  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:24:57.261591  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:24:57.261674  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:24:57.261740  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:24:57.261777  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:24:57.261824  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:24:57.261878  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:24:57.261950  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:24:57.262030  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:24:57.262099  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.262148  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.262186  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.262769  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:24:57.292641  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:24:57.324994  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:24:57.350011  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:24:57.393934  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:24:57.425087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:24:57.476207  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:24:57.521477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:24:57.553659  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:24:57.581891  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:24:57.616931  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:24:57.653395  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 20:24:57.676685  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:24:57.687849  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:24:57.697063  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701415  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.701527  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:24:57.748713  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:24:57.761692  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:24:57.778101  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782605  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.782719  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:24:57.851750  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:24:57.860250  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:24:57.872947  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877259  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.877426  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:24:57.935424  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:24:57.948490  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:24:57.952867  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:24:58.010016  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:24:58.063976  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:24:58.108039  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:24:58.150227  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:24:58.194750  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
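The `openssl x509 -checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours. The same check can be done in-process; apiserver.crt below is a hypothetical local path standing in for the files under /var/lib/minikube/certs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -noout -in <cert> -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}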
	I1017 20:24:58.245919  633180 kubeadm.go:400] StartCluster: {Name:ha-858120 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
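The StartCluster line above is a dump of the cluster config that minikube persists as config.json under the profile directory (the same file the log later saves to .minikube/profiles/ha-858120/config.json). A minimal Go sketch for reading a few of those fields from that file; the struct is an assumed subset shaped after the dump, not minikube's actual config types, and the path would need adjusting to your own .minikube directory:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// profileConfig is an assumed subset of the fields visible in the dump above.
type profileConfig struct {
	Name             string `json:"Name"`
	Driver           string `json:"Driver"`
	KubernetesConfig struct {
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
	} `json:"KubernetesConfig"`
	Nodes []struct {
		Name         string `json:"Name"`
		IP           string `json:"IP"`
		ControlPlane bool   `json:"ControlPlane"`
	} `json:"Nodes"`
}

func main() {
	// Path is illustrative; this run used /home/jenkins/minikube-integration/21664-584308/.minikube.
	data, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/ha-858120/config.json"))
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(data, &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s (%s, %s): %d nodes\n", cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, len(cfg.Nodes))
	for _, n := range cfg.Nodes {
		fmt.Printf("  node %q ip=%s control-plane=%v\n", n.Name, n.IP, n.ControlPlane)
	}
}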
	I1017 20:24:58.246100  633180 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 20:24:58.246199  633180 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 20:24:58.291268  633180 cri.go:89] found id: "ee8a159707f901bec7d65f64a977c75fa75282a553082688f13964bab6bed5f2"
	I1017 20:24:58.291334  633180 cri.go:89] found id: "62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1"
	I1017 20:24:58.291353  633180 cri.go:89] found id: "09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b"
	I1017 20:24:58.291371  633180 cri.go:89] found id: "56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	I1017 20:24:58.291391  633180 cri.go:89] found id: "7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6"
	I1017 20:24:58.291421  633180 cri.go:89] found id: ""
	I1017 20:24:58.291493  633180 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 20:24:58.311475  633180 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:24:58Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:24:58.311623  633180 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 20:24:58.320631  633180 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 20:24:58.320702  633180 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 20:24:58.320786  633180 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 20:24:58.333311  633180 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:24:58.333829  633180 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-858120" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.333984  633180 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "ha-858120" cluster setting kubeconfig missing "ha-858120" context setting]
	I1017 20:24:58.334333  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.334925  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1017 20:24:58.335797  633180 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1017 20:24:58.335856  633180 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1017 20:24:58.335916  633180 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1017 20:24:58.335942  633180 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1017 20:24:58.335963  633180 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1017 20:24:58.335987  633180 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1017 20:24:58.336351  633180 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 20:24:58.349523  633180 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.49.2
	I1017 20:24:58.349592  633180 kubeadm.go:601] duration metric: took 28.869563ms to restartPrimaryControlPlane
	I1017 20:24:58.349615  633180 kubeadm.go:402] duration metric: took 103.705091ms to StartCluster
	I1017 20:24:58.349647  633180 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.349744  633180 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:24:58.350418  633180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:24:58.350679  633180 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:24:58.350724  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:24:58.350749  633180 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 20:24:58.351348  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.356477  633180 out.go:179] * Enabled addons: 
	I1017 20:24:58.359610  633180 addons.go:514] duration metric: took 8.847324ms for enable addons: enabled=[]
	I1017 20:24:58.359682  633180 start.go:246] waiting for cluster config update ...
	I1017 20:24:58.359707  633180 start.go:255] writing updated cluster config ...
	I1017 20:24:58.363052  633180 out.go:203] 
	I1017 20:24:58.366186  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:24:58.366342  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.369685  633180 out.go:179] * Starting "ha-858120-m02" control-plane node in "ha-858120" cluster
	I1017 20:24:58.372589  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:24:58.375487  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:24:58.378319  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:24:58.378348  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:24:58.378444  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:24:58.378455  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:24:58.378576  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.378776  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:24:58.404390  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:24:58.404414  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:24:58.404426  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:24:58.404451  633180 start.go:360] acquireMachinesLock for ha-858120-m02: {Name:mk29f876727465da439698dbf4948f688d19b698 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:24:58.404504  633180 start.go:364] duration metric: took 36.981µs to acquireMachinesLock for "ha-858120-m02"
	I1017 20:24:58.404523  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:24:58.404529  633180 fix.go:54] fixHost starting: m02
	I1017 20:24:58.404783  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.432805  633180 fix.go:112] recreateIfNeeded on ha-858120-m02: state=Stopped err=<nil>
	W1017 20:24:58.432831  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:24:58.436247  633180 out.go:252] * Restarting existing docker container for "ha-858120-m02" ...
	I1017 20:24:58.436330  633180 cli_runner.go:164] Run: docker start ha-858120-m02
	I1017 20:24:58.871041  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:24:58.895697  633180 kic.go:430] container "ha-858120-m02" state is running.
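The preceding lines start the stopped machine container with docker start and then confirm its state via docker container inspect with a Go template for .State.Status. A small sketch of that status check through os/exec, with the container name taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the inspect call in the log and returns e.g. "running" or "exited".
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("ha-858120-m02")
	if err != nil {
		panic(err)
	}
	fmt.Println("container state:", state)
}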
	I1017 20:24:58.896208  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:24:58.931596  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:24:58.931856  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:24:58.931915  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:24:58.966121  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:24:58.966428  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:24:58.966438  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:24:58.967202  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:57724->127.0.0.1:33557: read: connection reset by peer
	I1017 20:25:02.146984  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.147066  633180 ubuntu.go:182] provisioning hostname "ha-858120-m02"
	I1017 20:25:02.147179  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.180883  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.181193  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.181204  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m02 && echo "ha-858120-m02" | sudo tee /etc/hostname
	I1017 20:25:02.371014  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m02
	
	I1017 20:25:02.371118  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.406904  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:02.407240  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:02.407264  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:02.593559  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 20:25:02.593637  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:02.593669  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:02.593708  633180 provision.go:84] configureAuth start
	I1017 20:25:02.593805  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:02.623320  633180 provision.go:143] copyHostCerts
	I1017 20:25:02.623365  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623400  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:02.623407  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:02.623486  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:02.623563  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623580  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:02.623584  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:02.623609  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:02.623646  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623662  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:02.623666  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:02.623694  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:02.623738  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m02 san=[127.0.0.1 192.168.49.3 ha-858120-m02 localhost minikube]
	I1017 20:25:02.747705  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:02.747782  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:02.747828  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:02.766757  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:02.880520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:02.880580  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:02.906371  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:02.906496  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:02.945019  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:02.945087  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 20:25:02.987301  633180 provision.go:87] duration metric: took 393.559503ms to configureAuth
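configureAuth above regenerates the machine server certificate and copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH (port 33557 on localhost in this run). A rough sketch of that copy-over-SSH pattern using golang.org/x/crypto/ssh; the copyFile helper, the tee-based transfer and the local paths are illustrative assumptions, not minikube's actual ssh_runner:

package main

import (
	"bytes"
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// copyFile pipes a local file into `sudo tee <dst>` on the remote host.
func copyFile(client *ssh.Client, src, dst string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", dst))
}

func main() {
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/ha-858120-m02/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33557", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := copyFile(client, os.ExpandEnv("$HOME/.minikube/certs/ca.pem"), "/etc/docker/ca.pem"); err != nil {
		panic(err)
	}
}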
	I1017 20:25:02.987344  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:02.987585  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:02.987711  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.018499  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:03.018813  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33557 <nil> <nil>}
	I1017 20:25:03.018831  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:03.435808  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:03.435834  633180 machine.go:96] duration metric: took 4.503969223s to provisionDockerMachine
	I1017 20:25:03.435844  633180 start.go:293] postStartSetup for "ha-858120-m02" (driver="docker")
	I1017 20:25:03.435855  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:03.435916  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:03.435964  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.455906  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.562871  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:03.566432  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:03.566502  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:03.566518  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:03.566584  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:03.566666  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:03.566676  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:03.566778  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:03.574445  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:03.599633  633180 start.go:296] duration metric: took 163.773711ms for postStartSetup
	I1017 20:25:03.599729  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:03.599785  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.627245  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.741852  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:03.758675  633180 fix.go:56] duration metric: took 5.354138506s for fixHost
	I1017 20:25:03.758698  633180 start.go:83] releasing machines lock for "ha-858120-m02", held for 5.354185538s
	I1017 20:25:03.758773  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m02
	I1017 20:25:03.786714  633180 out.go:179] * Found network options:
	I1017 20:25:03.789819  633180 out.go:179]   - NO_PROXY=192.168.49.2
	W1017 20:25:03.793065  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:03.793118  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:03.793187  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:03.793246  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.793459  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:03.793525  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m02
	I1017 20:25:03.843024  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:03.846827  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33557 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m02/id_rsa Username:docker}
	I1017 20:25:04.116601  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:04.182522  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:04.182658  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:04.199347  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:04.199411  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:04.199459  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:04.199536  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:04.224421  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:04.246523  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:04.246695  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:04.274907  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:04.293080  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:04.507388  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:04.744373  633180 docker.go:234] disabling docker service ...
	I1017 20:25:04.744489  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:04.763912  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:04.778471  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:04.999181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:05.212501  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:05.227293  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:05.243392  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:05.243504  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.253121  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:05.253268  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.262917  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.272790  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.282153  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:05.291008  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.300670  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.310655  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:05.320320  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:05.328861  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:05.337217  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:05.542704  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:25:05.766295  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:25:05.766406  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:25:05.770528  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:25:05.770594  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:25:05.774319  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:25:05.802224  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
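After restarting cri-o, the runner waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl. A minimal sketch of that kind of wait loop (path and timeout taken from the log; the poll interval is an arbitrary choice):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("crio socket is ready")
}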
	I1017 20:25:05.802316  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.832543  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:25:05.868559  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:25:05.871619  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:25:05.874677  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:25:05.891324  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:25:05.895481  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:05.906398  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:25:05.906643  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:05.906915  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:25:05.924891  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:25:05.925180  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.3
	I1017 20:25:05.925188  633180 certs.go:195] generating shared ca certs ...
	I1017 20:25:05.925202  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:25:05.925333  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:25:05.925371  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:25:05.925378  633180 certs.go:257] generating profile certs ...
	I1017 20:25:05.925461  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:25:05.925516  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.75ce5734
	I1017 20:25:05.925554  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:25:05.925562  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:25:05.925574  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:25:05.925587  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:25:05.925602  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:25:05.925612  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:25:05.925624  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:25:05.925635  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:25:05.925645  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:25:05.925695  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:25:05.925722  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:25:05.925731  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:25:05.925756  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:25:05.925779  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:25:05.925801  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:25:05.925843  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:05.925869  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:05.925885  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:25:05.925895  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:25:05.925947  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:25:05.942775  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:25:06.039567  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:25:06.043552  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:25:06.051886  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:25:06.055650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:25:06.071273  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:25:06.074980  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:25:06.084033  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:25:06.087747  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:25:06.095897  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:25:06.099650  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:25:06.109034  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:25:06.112875  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:25:06.121486  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:25:06.140459  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:25:06.159242  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:25:06.177880  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:25:06.196379  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:25:06.214366  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:25:06.232392  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:25:06.250082  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:25:06.268477  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:25:06.287023  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:25:06.306305  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:25:06.325727  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:25:06.339132  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:25:06.351861  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:25:06.364957  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:25:06.378148  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:25:06.391750  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:25:06.405157  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1017 20:25:06.418865  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:25:06.425313  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:25:06.433695  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437626  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.437740  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:25:06.479551  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:25:06.487333  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:25:06.495467  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.498961  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.499069  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:25:06.541081  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 20:25:06.549258  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:25:06.557861  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.561976  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.562057  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:25:06.604418  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:25:06.612470  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:25:06.616274  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:25:06.657319  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:25:06.701813  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:25:06.745127  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:25:06.787373  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:25:06.830322  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
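The openssl x509 -checkend 86400 calls above (and earlier, for the primary node) use the exit status to verify that each control-plane certificate stays valid for at least the next 24 hours. The same question can be sketched in Go with crypto/x509 instead of shelling out; the certificate path below is one of those from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// which is what `openssl x509 -checkend` answers via its exit status.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}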
	I1017 20:25:06.871900  633180 kubeadm.go:934] updating node {m02 192.168.49.3 8443 v1.34.1 crio true true} ...
	I1017 20:25:06.872035  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
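The kubelet drop-in shown above is regenerated per node: only the hostname-override and node-ip flags differ between ha-858120-m02 and the other nodes. A hedged text/template sketch of producing that ExecStart line; the template shape is an assumption, while the flag values are the ones logged for m02:

package main

import (
	"os"
	"text/template"
)

// Assumed, simplified template for the ExecStart line shown above.
const kubeletExec = `ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletExec))
	// Values for the m02 node in this run.
	err := tmpl.Execute(os.Stdout, struct {
		Version, Hostname, NodeIP string
	}{"v1.34.1", "ha-858120-m02", "192.168.49.3"})
	if err != nil {
		panic(err)
	}
}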
	I1017 20:25:06.872065  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:25:06.872127  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:25:06.885270  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:25:06.885337  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
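Before the manifest above is written, the runner probes for the ip_vs kernel module with lsmod and, since it is missing here, gives up on control-plane load balancing and keeps the plain kube-vip configuration. A small sketch of the same probe reading /proc/modules directly (exact module-name match, whereas the logged grep also matches ip_vs_* submodules):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasKernelModule scans /proc/modules for a loaded module by name.
func hasKernelModule(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && fields[0] == name {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := hasKernelModule("ip_vs")
	if err != nil {
		panic(err)
	}
	fmt.Println("ip_vs loaded:", ok)
}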
	I1017 20:25:06.885400  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:25:06.893245  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:25:06.893321  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:25:06.901109  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:25:06.914333  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:25:06.927147  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:25:06.941387  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:25:06.945076  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:25:06.954881  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.078941  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.093624  633180 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:25:07.094028  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:07.097836  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:25:07.100837  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:07.224505  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:25:07.238770  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:25:07.238907  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:25:07.239230  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m02" to be "Ready" ...
	W1017 20:25:17.242440  633180 node_ready.go:55] error getting node "ha-858120-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02": net/http: TLS handshake timeout
	I1017 20:25:20.808419  633180 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-858120-m02"
	I1017 20:25:27.596126  633180 node_ready.go:49] node "ha-858120-m02" is "Ready"
	I1017 20:25:27.596154  633180 node_ready.go:38] duration metric: took 20.356898962s for node "ha-858120-m02" to be "Ready" ...
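The node_ready wait above polls the API server until ha-858120-m02 reports the Ready condition, taking about 20 seconds here including one TLS handshake timeout on the first attempt. A rough client-go sketch of the same readiness poll; the kubeconfig path, retry interval and 6-minute budget are assumptions mirroring the log, not minikube's node_ready helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has the Ready condition set to True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(context.Background(), cs, "ha-858120-m02"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}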
	I1017 20:25:27.596166  633180 api_server.go:52] waiting for apiserver process to appear ...
	I1017 20:25:27.596229  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.096580  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:28.597221  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.097036  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:29.596474  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.096742  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.596355  633180 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:25:30.621450  633180 api_server.go:72] duration metric: took 23.527778082s to wait for apiserver process to appear ...
	I1017 20:25:30.621472  633180 api_server.go:88] waiting for apiserver healthz status ...
	I1017 20:25:30.621491  633180 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1017 20:25:30.643810  633180 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1017 20:25:30.645148  633180 api_server.go:141] control plane version: v1.34.1
	I1017 20:25:30.645172  633180 api_server.go:131] duration metric: took 23.693241ms to wait for apiserver health ...
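The healthz wait simply issues an HTTPS GET to /healthz on the API server and expects a 200 response with body ok. A sketch with net/http using the cluster CA from this run; note that depending on the cluster's RBAC the endpoint may additionally require client-certificate auth, which this sketch omits:

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// CA path is illustrative; this run kept it under the jenkins .minikube directory.
	caPEM, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/ca.crt"))
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	if !pool.AppendCertsFromPEM(caPEM) {
		panic("failed to parse cluster CA")
	}
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}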
	I1017 20:25:30.645181  633180 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 20:25:30.668363  633180 system_pods.go:59] 26 kube-system pods found
	I1017 20:25:30.668458  633180 system_pods.go:61] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668491  633180 system_pods.go:61] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.668536  633180 system_pods.go:61] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.668570  633180 system_pods.go:61] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.668614  633180 system_pods.go:61] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.668638  633180 system_pods.go:61] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.668658  633180 system_pods.go:61] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.668690  633180 system_pods.go:61] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.668714  633180 system_pods.go:61] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.668741  633180 system_pods.go:61] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.668778  633180 system_pods.go:61] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.668811  633180 system_pods.go:61] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.668837  633180 system_pods.go:61] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.668879  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.668901  633180 system_pods.go:61] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.668934  633180 system_pods.go:61] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.668958  633180 system_pods.go:61] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.668978  633180 system_pods.go:61] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.669017  633180 system_pods.go:61] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.669041  633180 system_pods.go:61] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.669067  633180 system_pods.go:61] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.669101  633180 system_pods.go:61] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.669127  633180 system_pods.go:61] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.669157  633180 system_pods.go:61] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.669188  633180 system_pods.go:61] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.669214  633180 system_pods.go:61] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.669236  633180 system_pods.go:74] duration metric: took 24.048955ms to wait for pod list to return data ...
	I1017 20:25:30.669273  633180 default_sa.go:34] waiting for default service account to be created ...
	I1017 20:25:30.687415  633180 default_sa.go:45] found service account: "default"
	I1017 20:25:30.687489  633180 default_sa.go:55] duration metric: took 18.191795ms for default service account to be created ...
	I1017 20:25:30.687514  633180 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 20:25:30.762042  633180 system_pods.go:86] 26 kube-system pods found
	I1017 20:25:30.762148  633180 system_pods.go:89] "coredns-66bc5c9577-hc5rq" [5d2c0566-0dab-4b95-b730-e11a0527dc77] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762181  633180 system_pods.go:89] "coredns-66bc5c9577-zfbms" [16d9f186-7601-485c-ad65-2640489fe6f4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 20:25:30.762224  633180 system_pods.go:89] "etcd-ha-858120" [db2639fd-6c88-4161-9a22-0ac10b2ab920] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 20:25:30.762250  633180 system_pods.go:89] "etcd-ha-858120-m02" [ee533c69-9c33-401a-a234-ba29a3dae2c0] Running
	I1017 20:25:30.762273  633180 system_pods.go:89] "etcd-ha-858120-m03" [274a4cb6-87a0-4b98-95a9-a38589c18c68] Running
	I1017 20:25:30.762307  633180 system_pods.go:89] "kindnet-7bwxv" [e536242d-87e2-4125-90c6-b8b7ce5c72cc] Running
	I1017 20:25:30.762330  633180 system_pods.go:89] "kindnet-jl4tq" [8d3b5f58-58cf-498b-a4f1-4b395857c3de] Running
	I1017 20:25:30.762352  633180 system_pods.go:89] "kindnet-mk8st" [397183fa-e683-45a8-a7ef-a0ded0dd0816] Running
	I1017 20:25:30.762387  633180 system_pods.go:89] "kindnet-n44c4" [a6b950ac-0821-48bb-b4f4-27c867af408f] Running
	I1017 20:25:30.762413  633180 system_pods.go:89] "kube-apiserver-ha-858120" [078fa8e7-03d8-445e-91d1-c10b57a0ce8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 20:25:30.762435  633180 system_pods.go:89] "kube-apiserver-ha-858120-m02" [e50ce8f9-b14c-4d62-9d60-2c2195865d30] Running
	I1017 20:25:30.762469  633180 system_pods.go:89] "kube-apiserver-ha-858120-m03" [81e1abc7-8648-48c3-a7e0-87ba9afbc0d8] Running
	I1017 20:25:30.762497  633180 system_pods.go:89] "kube-controller-manager-ha-858120" [73d16d85-4687-4a18-bf68-220fdc8015dd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 20:25:30.762517  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m02" [0efb7ace-b738-4510-9e40-f70774bea3f9] Running
	I1017 20:25:30.762554  633180 system_pods.go:89] "kube-controller-manager-ha-858120-m03" [394acd20-181d-4ae2-9a04-0a6ab6c87165] Running
	I1017 20:25:30.762578  633180 system_pods.go:89] "kube-proxy-52dzj" [1324e014-7923-440e-91f3-e28c0fb749ca] Running
	I1017 20:25:30.762599  633180 system_pods.go:89] "kube-proxy-5qtb8" [e90d8e22-6ca9-4541-960c-4ecc95a31d5f] Running
	I1017 20:25:30.762635  633180 system_pods.go:89] "kube-proxy-cn926" [fa32c08b-56da-4395-b517-24b49088e6a0] Running
	I1017 20:25:30.762662  633180 system_pods.go:89] "kube-proxy-wzlp2" [6376f853-7135-4859-b0fa-7940dd9d0273] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 20:25:30.762684  633180 system_pods.go:89] "kube-scheduler-ha-858120" [3a6b1803-259c-4a75-943d-6cfa195e37ba] Running
	I1017 20:25:30.762717  633180 system_pods.go:89] "kube-scheduler-ha-858120-m02" [c42db8fa-e3b4-4ce1-9a08-186361f845b9] Running
	I1017 20:25:30.762741  633180 system_pods.go:89] "kube-scheduler-ha-858120-m03" [7b61598b-0e4c-46ac-9808-331b2265e9bf] Running
	I1017 20:25:30.762760  633180 system_pods.go:89] "kube-vip-ha-858120" [415ce87d-23b5-4f2f-94cd-4cdbd29ad048] Running
	I1017 20:25:30.762794  633180 system_pods.go:89] "kube-vip-ha-858120-m02" [3c808e8f-fa62-4120-b853-0b6dd7b6e81a] Running
	I1017 20:25:30.762816  633180 system_pods.go:89] "kube-vip-ha-858120-m03" [f9389bd4-247b-4e63-a621-cc93ceddc7b3] Running
	I1017 20:25:30.762834  633180 system_pods.go:89] "storage-provisioner" [f9e9dfd7-e90a-4da3-969d-2669daa3d123] Running
	I1017 20:25:30.762855  633180 system_pods.go:126] duration metric: took 75.322066ms to wait for k8s-apps to be running ...
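The wait above (system_pods.go) lists every kube-system pod and treats the apps as running once each pod's phase is Running, even while individual containers still report ContainersNotReady after the restart. A minimal client-go sketch of that kind of check, assuming a reachable kubeconfig; the names below are illustrative and not minikube's own code:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: a kubeconfig in the default location; minikube builds its
	// client config from the profile files instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// List every pod in kube-system, as the system_pods wait does.
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		// A pod counts as "running" here by phase alone, even when some of
		// its containers are still reporting ContainersNotReady.
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			fmt.Printf("pod %q not running yet (phase %s)\n", p.Name, p.Status.Phase)
		}
	}
}
```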
	I1017 20:25:30.762895  633180 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 20:25:30.762983  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:25:30.798874  633180 system_svc.go:56] duration metric: took 35.957427ms WaitForService to wait for kubelet
	I1017 20:25:30.798951  633180 kubeadm.go:586] duration metric: took 23.705274367s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 20:25:30.798985  633180 node_conditions.go:102] verifying NodePressure condition ...
	I1017 20:25:30.805472  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805553  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805580  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805600  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805635  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805661  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805684  633180 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 20:25:30.805722  633180 node_conditions.go:123] node cpu capacity is 2
	I1017 20:25:30.805746  633180 node_conditions.go:105] duration metric: took 6.741948ms to run NodePressure ...
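The NodePressure step above only reads back each node's reported capacity (2 CPUs and 203034800Ki of ephemeral storage per node in this run). A short client-go sketch of reading those fields, under the same kubeconfig assumption as before:

```go
package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity["cpu"]                // "2" in this run
		disk := n.Status.Capacity["ephemeral-storage"] // "203034800Ki" in this run
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
	}
}
```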
	I1017 20:25:30.805773  633180 start.go:241] waiting for startup goroutines ...
	I1017 20:25:30.805824  633180 start.go:255] writing updated cluster config ...
	I1017 20:25:30.809328  633180 out.go:203] 
	I1017 20:25:30.812477  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:30.812660  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.816059  633180 out.go:179] * Starting "ha-858120-m03" control-plane node in "ha-858120" cluster
	I1017 20:25:30.819758  633180 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:25:30.822780  633180 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:25:30.825590  633180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:25:30.825654  633180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:25:30.825902  633180 cache.go:58] Caching tarball of preloaded images
	I1017 20:25:30.826027  633180 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:25:30.826092  633180 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:25:30.826241  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:30.865897  633180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:25:30.865917  633180 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:25:30.865932  633180 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:25:30.865956  633180 start.go:360] acquireMachinesLock for ha-858120-m03: {Name:mk0745e738c38fcaad2c00b3d5938ec5b18bc19a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:25:30.866008  633180 start.go:364] duration metric: took 36.481µs to acquireMachinesLock for "ha-858120-m03"
	I1017 20:25:30.866027  633180 start.go:96] Skipping create...Using existing machine configuration
	I1017 20:25:30.866033  633180 fix.go:54] fixHost starting: m03
	I1017 20:25:30.866284  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:30.902472  633180 fix.go:112] recreateIfNeeded on ha-858120-m03: state=Stopped err=<nil>
	W1017 20:25:30.902498  633180 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 20:25:30.906012  633180 out.go:252] * Restarting existing docker container for "ha-858120-m03" ...
	I1017 20:25:30.906100  633180 cli_runner.go:164] Run: docker start ha-858120-m03
	I1017 20:25:31.385666  633180 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:25:31.416798  633180 kic.go:430] container "ha-858120-m03" state is running.
	I1017 20:25:31.417186  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:31.445988  633180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/config.json ...
	I1017 20:25:31.446246  633180 machine.go:93] provisionDockerMachine start ...
	I1017 20:25:31.446327  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:31.476234  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:31.476543  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:31.476558  633180 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 20:25:31.477171  633180 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 20:25:34.759062  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:34.759091  633180 ubuntu.go:182] provisioning hostname "ha-858120-m03"
	I1017 20:25:34.759181  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:34.785061  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:34.785366  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:34.785384  633180 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-858120-m03 && echo "ha-858120-m03" | sudo tee /etc/hostname
	I1017 20:25:35.026879  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-858120-m03
	
	I1017 20:25:35.027037  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.055472  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:35.055775  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:35.055791  633180 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-858120-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-858120-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-858120-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 20:25:35.277230  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
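The provisioning steps above (hostname, /etc/hostname, /etc/hosts) all run over libmachine's "native" SSH client, i.e. golang.org/x/crypto/ssh; the initial "handshake failed: EOF" at 20:25:31 simply means the restarted container's sshd was not listening yet, so the dial is retried. A rough standalone sketch of that pattern, with the port and key path taken from this log and everything else purely illustrative:

```go
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		Timeout:         10 * time.Second,
	}

	// Retry the dial: right after `docker start` the container's sshd may not
	// be up, which surfaces as "ssh: handshake failed: EOF".
	var client *ssh.Client
	for i := 0; i < 10; i++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:33562", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if client == nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("hostname: %s", out)
}
```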
	I1017 20:25:35.277256  633180 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 20:25:35.277273  633180 ubuntu.go:190] setting up certificates
	I1017 20:25:35.277283  633180 provision.go:84] configureAuth start
	I1017 20:25:35.277348  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:35.311355  633180 provision.go:143] copyHostCerts
	I1017 20:25:35.311397  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311430  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 20:25:35.311438  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 20:25:35.311519  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 20:25:35.311605  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311621  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 20:25:35.311626  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 20:25:35.311652  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 20:25:35.311691  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311709  633180 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 20:25:35.311713  633180 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 20:25:35.311737  633180 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 20:25:35.311782  633180 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.ha-858120-m03 san=[127.0.0.1 192.168.49.4 ha-858120-m03 localhost minikube]
	I1017 20:25:35.867211  633180 provision.go:177] copyRemoteCerts
	I1017 20:25:35.867305  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 20:25:35.867370  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:35.885861  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:36.014744  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1017 20:25:36.014818  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 20:25:36.078628  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1017 20:25:36.078695  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1017 20:25:36.159581  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1017 20:25:36.159683  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 20:25:36.221533  633180 provision.go:87] duration metric: took 944.235432ms to configureAuth
	I1017 20:25:36.221570  633180 ubuntu.go:206] setting minikube options for container-runtime
	I1017 20:25:36.221864  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:25:36.222030  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:36.252315  633180 main.go:141] libmachine: Using SSH client type: native
	I1017 20:25:36.252618  633180 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33562 <nil> <nil>}
	I1017 20:25:36.252633  633180 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 20:25:37.901354  633180 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 20:25:37.901379  633180 machine.go:96] duration metric: took 6.455113021s to provisionDockerMachine
	I1017 20:25:37.901397  633180 start.go:293] postStartSetup for "ha-858120-m03" (driver="docker")
	I1017 20:25:37.901423  633180 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 20:25:37.901507  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 20:25:37.901580  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:37.931348  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.063033  633180 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 20:25:38.067834  633180 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 20:25:38.067869  633180 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 20:25:38.067882  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 20:25:38.067943  633180 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 20:25:38.068028  633180 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 20:25:38.068035  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /etc/ssl/certs/5861722.pem
	I1017 20:25:38.068144  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 20:25:38.080413  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:25:38.111750  633180 start.go:296] duration metric: took 210.321276ms for postStartSetup
	I1017 20:25:38.111848  633180 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:25:38.111903  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.139479  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.252206  633180 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 20:25:38.257552  633180 fix.go:56] duration metric: took 7.391512723s for fixHost
	I1017 20:25:38.257574  633180 start.go:83] releasing machines lock for "ha-858120-m03", held for 7.39155818s
	I1017 20:25:38.257643  633180 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:25:38.279335  633180 out.go:179] * Found network options:
	I1017 20:25:38.282289  633180 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1017 20:25:38.285193  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285225  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285250  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	W1017 20:25:38.285261  633180 proxy.go:120] fail to check proxy env: Error ip not in block
	I1017 20:25:38.285342  633180 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 20:25:38.285383  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.285405  633180 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 20:25:38.285456  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:25:38.309400  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.319419  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33562 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:25:38.495206  633180 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 20:25:38.620333  633180 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 20:25:38.620409  633180 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 20:25:38.635710  633180 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 20:25:38.635735  633180 start.go:495] detecting cgroup driver to use...
	I1017 20:25:38.635766  633180 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 20:25:38.635815  633180 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 20:25:38.658258  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 20:25:38.677709  633180 docker.go:218] disabling cri-docker service (if available) ...
	I1017 20:25:38.677780  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 20:25:38.695381  633180 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 20:25:38.718728  633180 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 20:25:38.983870  633180 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 20:25:39.232982  633180 docker.go:234] disabling docker service ...
	I1017 20:25:39.233056  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 20:25:39.251900  633180 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 20:25:39.268736  633180 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 20:25:39.513181  633180 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 20:25:39.774360  633180 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 20:25:39.795448  633180 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 20:25:39.819737  633180 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 20:25:39.819803  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.835507  633180 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 20:25:39.835578  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.848330  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.863809  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.873655  633180 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 20:25:39.886248  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.899031  633180 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.910745  633180 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 20:25:39.923167  633180 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 20:25:39.945269  633180 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 20:25:39.956015  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:25:40.185598  633180 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 20:27:10.577185  633180 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.391479788s)
	I1017 20:27:10.577210  633180 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 20:27:10.577270  633180 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 20:27:10.581599  633180 start.go:563] Will wait 60s for crictl version
	I1017 20:27:10.581663  633180 ssh_runner.go:195] Run: which crictl
	I1017 20:27:10.586217  633180 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 20:27:10.618110  633180 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 20:27:10.618197  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.657726  633180 ssh_runner.go:195] Run: crio --version
	I1017 20:27:10.690017  633180 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 20:27:10.692996  633180 out.go:179]   - env NO_PROXY=192.168.49.2
	I1017 20:27:10.695853  633180 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1017 20:27:10.698743  633180 cli_runner.go:164] Run: docker network inspect ha-858120 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 20:27:10.717568  633180 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1017 20:27:10.721686  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:10.732598  633180 mustload.go:65] Loading cluster: ha-858120
	I1017 20:27:10.732855  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:10.733110  633180 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:27:10.755756  633180 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:27:10.756043  633180 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120 for IP: 192.168.49.4
	I1017 20:27:10.756057  633180 certs.go:195] generating shared ca certs ...
	I1017 20:27:10.756073  633180 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:27:10.756206  633180 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 20:27:10.756249  633180 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 20:27:10.756259  633180 certs.go:257] generating profile certs ...
	I1017 20:27:10.756334  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key
	I1017 20:27:10.756400  633180 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key.daaf2b71
	I1017 20:27:10.756443  633180 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key
	I1017 20:27:10.756456  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1017 20:27:10.756468  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1017 20:27:10.756484  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1017 20:27:10.756494  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1017 20:27:10.756505  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1017 20:27:10.756520  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1017 20:27:10.756531  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1017 20:27:10.756545  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1017 20:27:10.756595  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 20:27:10.756627  633180 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 20:27:10.756639  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 20:27:10.756664  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 20:27:10.756689  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 20:27:10.756714  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 20:27:10.756760  633180 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 20:27:10.756791  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:10.756807  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem -> /usr/share/ca-certificates/586172.pem
	I1017 20:27:10.756818  633180 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> /usr/share/ca-certificates/5861722.pem
	I1017 20:27:10.756875  633180 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:27:10.776286  633180 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33552 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:27:10.875440  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1017 20:27:10.879271  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1017 20:27:10.887346  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1017 20:27:10.890991  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1017 20:27:10.899445  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1017 20:27:10.902677  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1017 20:27:10.910747  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1017 20:27:10.914609  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1017 20:27:10.923275  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1017 20:27:10.927331  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1017 20:27:10.937614  633180 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1017 20:27:10.941051  633180 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1017 20:27:10.949375  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 20:27:10.970388  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 20:27:10.989978  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 20:27:11.024313  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 20:27:11.045252  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 20:27:11.067969  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 20:27:11.093977  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 20:27:11.116400  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 20:27:11.143991  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 20:27:11.165234  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 20:27:11.186154  633180 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 20:27:11.204999  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1017 20:27:11.217584  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1017 20:27:11.231184  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1017 20:27:11.245544  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1017 20:27:11.258825  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1017 20:27:11.273380  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1017 20:27:11.288154  633180 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
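The "scp memory --> ..." lines above stream in-memory byte slices (sa.pub/sa.key, the front-proxy and etcd CAs, the kubeconfig) to the node over the existing SSH connection rather than copying files from disk. minikube's ssh_runner does this with its own scp handling; the sketch below only illustrates one simple way to achieve the same effect with `sudo tee`:

```go
package main

import (
	"bytes"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// pushFile streams a byte slice to a remote path through `sudo tee`; an
// illustration of what the "scp memory --> ..." steps accomplish, not
// minikube's actual implementation.
func pushFile(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	return sess.Run("sudo tee " + remotePath + " >/dev/null")
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	// Port 33552 is the SSH port the log shows for the ha-858120 container.
	client, err := ssh.Dial("tcp", "127.0.0.1:33552", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
		Timeout:         10 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ca, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	if err := pushFile(client, ca, "/var/lib/minikube/certs/ca.crt"); err != nil {
		panic(err)
	}
}
```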
	I1017 20:27:11.301714  633180 ssh_runner.go:195] Run: openssl version
	I1017 20:27:11.307871  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 20:27:11.316139  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320071  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.320164  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 20:27:11.360582  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 20:27:11.368911  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 20:27:11.386044  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389821  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.389916  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 20:27:11.431364  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 20:27:11.439391  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 20:27:11.448231  633180 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452172  633180 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.452235  633180 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 20:27:11.493408  633180 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
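The openssl/ln pairs above install each extra CA under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-based clients locate trusted certificates. A hedged Go sketch of those two steps run locally (needs root to write /etc/ssl/certs; the path is a placeholder):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCert links a PEM certificate into /etc/ssl/certs under its OpenSSL
// subject-hash name, mirroring the `openssl x509 -hash` + `ln -fs` pair above.
func installCert(pemPath string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror the -f in `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```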
	I1017 20:27:11.501304  633180 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 20:27:11.505093  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 20:27:11.546404  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 20:27:11.588587  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 20:27:11.629385  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 20:27:11.670643  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 20:27:11.711584  633180 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 20:27:11.752896  633180 kubeadm.go:934] updating node {m03 192.168.49.4 8443 v1.34.1 crio true true} ...
	I1017 20:27:11.752991  633180 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-858120-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:ha-858120 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 20:27:11.753019  633180 kube-vip.go:115] generating kube-vip config ...
	I1017 20:27:11.753080  633180 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1017 20:27:11.765738  633180 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1017 20:27:11.765801  633180 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
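minikube's kube-vip.go renders the static-pod manifest above from a Go template, substituting the VIP address, port and interface, and (as the lsmod check shows) only enables IPVS-based control-plane load-balancing when the ip_vs kernel modules are present. A much-reduced sketch of that style of templating; the template text here is abbreviated and is not minikube's actual template:

```go
package main

import (
	"os"
	"text/template"
)

// Only the fields the log shows being filled in; the real template is larger.
type vipConfig struct {
	VIP       string
	Port      int
	Interface string
}

const manifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: ghcr.io/kube-vip/kube-vip:v1.0.1
    args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.VIP}}}
  hostNetwork: true
`

func main() {
	tmpl := template.Must(template.New("kube-vip").Parse(manifest))
	cfg := vipConfig{VIP: "192.168.49.254", Port: 8443, Interface: "eth0"}
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
```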
	I1017 20:27:11.765864  633180 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 20:27:11.773834  633180 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 20:27:11.773902  633180 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1017 20:27:11.782020  633180 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1017 20:27:11.794989  633180 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 20:27:11.809996  633180 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1017 20:27:11.825247  633180 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1017 20:27:11.828873  633180 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 20:27:11.838796  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:11.986822  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.004552  633180 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:27:12.005009  633180 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:27:12.009913  633180 out.go:179] * Verifying Kubernetes components...
	I1017 20:27:12.012573  633180 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 20:27:12.166504  633180 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 20:27:12.181240  633180 kapi.go:59] client config for ha-858120: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.crt", KeyFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/profiles/ha-858120/client.key", CAFile:"/home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2120300), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1017 20:27:12.181372  633180 kubeadm.go:491] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1017 20:27:12.181669  633180 node_ready.go:35] waiting up to 6m0s for node "ha-858120-m03" to be "Ready" ...
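node_ready.go then polls the node object until its Ready condition becomes True; after this restart the m03 kubelet keeps reporting Unknown, which produces the long run of retry lines that follows. A minimal client-go sketch of such a wait loop, assuming a reachable kubeconfig (an illustration, not minikube's code):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// ready reports whether the node's Ready condition is True; it stays "Unknown"
// while the kubelet on the restarted node has not reported back yet.
func ready(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitForNode(c kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		n, err := c.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && ready(n) {
			return nil
		}
		time.Sleep(2 * time.Second) // the log shows roughly 2s between retries
	}
	return fmt.Errorf("node %q did not become Ready within %s", name, timeout)
}

func main() {
	// Assumption: admin kubeconfig path; minikube uses its profile's client certs.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	if err := waitForNode(kubernetes.NewForConfigOrDie(cfg), "ha-858120-m03", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```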
	W1017 20:27:14.185938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:16.186949  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:18.685673  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:20.686393  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:23.185742  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:25.186041  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:27.686171  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:30.186140  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:32.685938  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:34.686362  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:37.189099  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:39.685178  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:41.685898  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:43.686246  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:46.185981  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:48.186022  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:50.685565  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:53.185024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:55.185063  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:57.186756  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:27:59.685967  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:02.185450  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:04.685930  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:07.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:09.185945  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:11.685298  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:13.685825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:16.186173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:18.685675  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:21.185822  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:23.686024  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:25.686653  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:27.688976  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:30.185995  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:32.685998  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:34.686062  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:37.185512  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:39.684946  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:41.685173  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:43.685392  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:45.686411  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:48.185559  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:50.685010  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:52.685699  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:54.685799  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:57.185287  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:28:59.185541  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:01.186445  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:03.685663  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:05.686118  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:08.185421  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:10.185464  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:12.685166  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:14.685776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:16.686147  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:18.686284  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:21.185551  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:23.685297  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:26.185709  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:28.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:30.186229  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:32.685640  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:34.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:36.685906  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:39.185156  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:41.185196  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:43.185432  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:45.189065  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:47.685980  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:50.185249  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:52.186422  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:54.685912  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:57.185530  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:29:59.185859  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:01.187381  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:03.685399  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:05.685481  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:08.187943  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:10.689106  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:13.185786  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:15.685607  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:17.686048  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:19.686753  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:22.185049  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:24.186071  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:26.685608  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:28.686143  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:31.185273  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:33.186568  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:35.685304  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:37.685459  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:39.685964  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:42.186035  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:44.186982  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:46.685781  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:49.185082  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:51.185419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:53.686212  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:56.185582  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:30:58.185659  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:00.222492  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:02.685725  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:04.686504  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:07.186161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:09.685238  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:11.685865  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:14.185500  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:16.185620  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:18.192262  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:20.686051  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:23.185373  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:25.686121  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:28.187578  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:30.689269  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:33.185825  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:35.686100  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:38.186012  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:40.685515  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:42.685703  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:44.685871  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:47.185764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:49.685433  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:51.685733  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:54.185161  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:56.685619  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:31:59.185113  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:01.185211  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:03.185561  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:05.186288  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:07.685440  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:09.685758  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:12.185776  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:14.185887  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:16.685436  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:19.185337  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:21.686419  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:24.186002  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:26.686017  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:29.185789  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:31.686359  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:34.185117  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:36.185746  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:38.185848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:40.685764  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:43.185924  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:45.186873  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:47.685424  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:49.685760  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:52.185842  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:54.685648  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:57.185264  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:32:59.185532  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:01.186342  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:03.685323  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:05.685848  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:07.686600  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	W1017 20:33:10.185305  633180 node_ready.go:57] node "ha-858120-m03" has "Ready":"Unknown" status (will retry)
	I1017 20:33:12.181809  633180 node_ready.go:38] duration metric: took 6m0.000088857s for node "ha-858120-m03" to be "Ready" ...
	I1017 20:33:12.184950  633180 out.go:203] 
	W1017 20:33:12.187811  633180 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1017 20:33:12.187835  633180 out.go:285] * 
	W1017 20:33:12.189989  633180 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:33:12.193072  633180 out.go:203] 
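
	The wait loop above is minikube's node_ready poll: it keeps re-reading the Ready condition of node "ha-858120-m03" until the 6m0s deadline expires, which is what produces the GUEST_START failure. The following is a minimal client-go sketch of the same Ready-condition check, not part of the test output; the kubeconfig path and node name are illustrative and assume access to the ha-858120 cluster.

	// readycheck.go - hedged sketch of the Ready-condition poll that node_ready.go performs.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); assumption, adjust for the CI host.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.Background(), "ha-858120-m03", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// The retry loop in the log waits for this condition to report "True";
		// here it stays "Unknown" because the kubelet stopped posting status.
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s reason=%s\n", node.Name, cond.Status, cond.Reason)
			}
		}
	}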
	
	
	==> CRI-O <==
	Oct 17 20:25:28 ha-858120 crio[661]: time="2025-10-17T20:25:28.449562303Z" level=info msg="Started container" PID=1184 containerID=dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd description=kube-system/coredns-66bc5c9577-hc5rq/coredns id=3b68bae1-e38f-42c1-bdab-f61b3987b2a5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1860023794c840fe5be850bb22c178acfad4e2cba7c02a3af6ce14acb4379be7
	Oct 17 20:25:59 ha-858120 conmon[1152]: conmon e299f9f677259417858b <ninfo>: container 1163 exited with status 1
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.07219365Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=72b412f9-766c-4334-a938-00c3ec219964 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.076148095Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=30596c8c-f29a-4d19-9d8b-b08ba7b6cf56 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082176481Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.082434036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.09632455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096698857Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/passwd: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.096725565Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/26c3ec84360f7d08697e8833d889ca6d784e2bd57f626cd84a3158219881376f/merged/etc/group: no such file or directory"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.097064885Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.125738481Z" level=info msg="Created container 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008: kube-system/storage-provisioner/storage-provisioner" id=d085a1ef-a71e-4c02-a2c5-4efc228e51a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.129459847Z" level=info msg="Starting container: 5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008" id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:25:59 ha-858120 crio[661]: time="2025-10-17T20:25:59.13991512Z" level=info msg="Started container" PID=1399 containerID=5b5162cc662da211f1c790ce12f24ba9d3d5458276eb7b82079aae366cceb008 description=kube-system/storage-provisioner/storage-provisioner id=c1c58d95-5401-428a-9837-b502f95a9129 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.628460002Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632408867Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.63244616Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.632468453Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645565094Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645599006Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.645616032Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650549978Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650585654Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.650619608Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.654017064Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:26:08 ha-858120 crio[661]: time="2025-10-17T20:26:08.65405073Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5b5162cc662da       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Running             storage-provisioner       3                   b8cc01892712d       storage-provisioner                 kube-system
	dc932b06eb666       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   1860023794c84       coredns-66bc5c9577-hc5rq            kube-system
	f99357006a077       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   7 minutes ago       Running             coredns                   1                   24158395efe09       coredns-66bc5c9577-zfbms            kube-system
	30fbb87d1faca       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 minutes ago       Running             kube-proxy                1                   916eeadf90187       kube-proxy-5qtb8                    kube-system
	53ef170773eb6       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   7 minutes ago       Running             busybox                   1                   d544ca125ccd8       busybox-7b57f96db7-jw7vx            default
	e299f9f677259       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   7 minutes ago       Exited              storage-provisioner       2                   b8cc01892712d       storage-provisioner                 kube-system
	97aba0e5d7c48       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 minutes ago       Running             kindnet-cni               1                   4485a8e917cbe       kindnet-7bwxv                       kube-system
	9ce296c3989a1       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   7 minutes ago       Running             kube-controller-manager   2                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	638256daf481d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Running             kube-apiserver            2                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	ee8a159707f90       2a8917f902489be5a8dd414209c32b77bd644d187ea646d86dbdc31e85efb551   8 minutes ago       Running             kube-vip                  0                   dcbb9d5285b37       kube-vip-ha-858120                  kube-system
	62a0a9e565cbd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   8 minutes ago       Exited              kube-apiserver            1                   c88bd42c2e749       kube-apiserver-ha-858120            kube-system
	09cba02ad2598       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   8 minutes ago       Running             etcd                      1                   fce4c8d39b2df       etcd-ha-858120                      kube-system
	56f597b80ce9d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   8 minutes ago       Exited              kube-controller-manager   1                   5c07c6f41e66b       kube-controller-manager-ha-858120   kube-system
	7965630635b8c       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   8 minutes ago       Running             kube-scheduler            1                   8e060a6690898       kube-scheduler-ha-858120            kube-system
	
	
	==> coredns [dc932b06eb666402a72725d5039a2486a69ddd6c16dff73531dddef3a26cc8cd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60334 - 25267 "HINFO IN 5061499944827162834.2776303602288628219. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.03310744s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [f99357006a077698a85f223986d69f2d7d83e5bce90c1c2cc8ec2f393e14a413] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47468 - 22569 "HINFO IN 1283965037511611162.4618766947171906600. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.039278336s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
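
	Both coredns instances report the same symptom: list calls against the kubernetes service ClusterIP (10.96.0.1:443) hit an i/o timeout while the apiserver is restarting. A minimal sketch of that connectivity probe is below; it is not part of the original report and assumes it is run from a pod on the cluster network, where 10.96.0.1 is routable.

	// dialprobe.go - hedged sketch of the TCP reachability check CoreDNS's kubernetes plugin is failing.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the in-cluster kubernetes service endpoint seen in the coredns errors above.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed (matches the i/o timeout in the coredns log):", err)
			return
		}
		defer conn.Close()
		fmt.Println("dial succeeded:", conn.RemoteAddr())
	}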
	
	
	==> describe nodes <==
	Name:               ha-858120
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_18_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:18:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:30:34 +0000   Fri, 17 Oct 2025 20:19:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-858120
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                8074ca1f-e50b-46a3-ae2a-18fe40cb596a
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jw7vx             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-hc5rq             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-zfbms             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-858120                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-7bwxv                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-858120             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-858120    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5qtb8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-858120             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-858120                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m56s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 14m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-858120 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           9m8s                   node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   Starting                 8m29s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m29s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m29s (x8 over 8m29s)  kubelet          Node ha-858120 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m29s (x8 over 8m29s)  kubelet          Node ha-858120 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m29s (x8 over 8m29s)  kubelet          Node ha-858120 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m51s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-858120 event: Registered Node ha-858120 in Controller
	
	
	Name:               ha-858120-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_19_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:19:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:33:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:33:16 +0000   Fri, 17 Oct 2025 20:24:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-858120-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                86212adb-5900-4e82-861f-965be14c377b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8kb7f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-858120-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-n44c4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-858120-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-858120-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wzlp2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-858120-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-858120-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m                     kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m37s                  kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 9m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 9m46s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m45s (x9 over 9m46s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m45s (x8 over 9m46s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m45s (x7 over 9m46s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   NodeNotReady             9m13s                  node-controller  Node ha-858120-m02 status is now: NodeNotReady
	  Normal   RegisteredNode           9m8s                   node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Warning  CgroupV1                 8m25s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 8m25s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  8m25s (x8 over 8m25s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m25s (x8 over 8m25s)  kubelet          Node ha-858120-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m25s (x8 over 8m25s)  kubelet          Node ha-858120-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m51s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	  Normal   RegisteredNode           7m24s                  node-controller  Node ha-858120-m02 event: Registered Node ha-858120-m02 in Controller
	
	
	Name:               ha-858120-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-858120-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=ha-858120
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_10_17T20_22_19_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:22:18 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-858120-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:24:11 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 17 Oct 2025 20:23:01 +0000   Fri, 17 Oct 2025 20:26:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-858120-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                78570753-9906-4f75-b3e5-06c23a58a2cc
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-jl4tq       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-cn926    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-858120-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeReady                10m                kubelet          Node ha-858120-m04 status is now: NodeReady
	  Normal  RegisteredNode           9m9s               node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m52s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  RegisteredNode           7m25s              node-controller  Node ha-858120-m04 event: Registered Node ha-858120-m04 in Controller
	  Normal  NodeNotReady             7m2s               node-controller  Node ha-858120-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[Oct17 18:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:11] hrtimer: interrupt took 20156783 ns
	[Oct17 19:56] kauditd_printk_skb: 8 callbacks suppressed
	[Oct17 19:58] overlayfs: idmapped layers are currently not supported
	[  +0.072304] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Oct17 20:04] overlayfs: idmapped layers are currently not supported
	[Oct17 20:05] overlayfs: idmapped layers are currently not supported
	[Oct17 20:18] overlayfs: idmapped layers are currently not supported
	[Oct17 20:19] overlayfs: idmapped layers are currently not supported
	[Oct17 20:20] overlayfs: idmapped layers are currently not supported
	[Oct17 20:22] overlayfs: idmapped layers are currently not supported
	[Oct17 20:23] overlayfs: idmapped layers are currently not supported
	[Oct17 20:24] overlayfs: idmapped layers are currently not supported
	[Oct17 20:25] overlayfs: idmapped layers are currently not supported
	[ +32.795830] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [09cba02ad2598d6d8dbf7e7efe21a1ea91f7d9f9b4a697adc9b869ad7071c40b] <==
	{"level":"warn","ts":"2025-10-17T20:33:02.934965Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236719Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:04.236770Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939190Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:07.939286Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238618Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:08.238697Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240272Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.240343Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"c4547612b813713e","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940296Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:12.940377Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"c4547612b813713e","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-10-17T20:33:16.105926Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:33:16.112935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:33:16.133256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:39490","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:33:16.180694Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7668597845192143369 12593026477526642892)"}
	{"level":"info","ts":"2025-10-17T20:33:16.182674Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"c4547612b813713e","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-10-17T20:33:16.182729Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182771Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182804Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182869Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182891Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182907Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182945Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182957Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"c4547612b813713e"}
	{"level":"info","ts":"2025-10-17T20:33:16.182979Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeRemoveNode","raft-conf-change-node-id":"c4547612b813713e"}
	
	
	==> kernel <==
	 20:33:26 up  3:15,  0 user,  load average: 0.88, 0.88, 1.32
	Linux ha-858120 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97aba0e5d7c482a104be9a87cd7b78aec663a93d84c72a85316a204d1548cc16] <==
	I1017 20:32:48.629038       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:32:48.629095       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:48.629107       1 main.go:301] handling current node
	I1017 20:32:58.632651       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:32:58.632696       1 main.go:301] handling current node
	I1017 20:32:58.632712       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:32:58.632718       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:32:58.632922       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:32:58.632931       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:32:58.633040       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:32:58.633048       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.624264       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:33:08.624367       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:33:08.624604       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1017 20:33:08.624662       1 main.go:324] Node ha-858120-m03 has CIDR [10.244.2.0/24] 
	I1017 20:33:08.624908       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:33:08.624952       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
	I1017 20:33:08.625258       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:33:08.625305       1 main.go:301] handling current node
	I1017 20:33:18.624829       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1017 20:33:18.624863       1 main.go:301] handling current node
	I1017 20:33:18.624879       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1017 20:33:18.624884       1 main.go:324] Node ha-858120-m02 has CIDR [10.244.1.0/24] 
	I1017 20:33:18.625048       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1017 20:33:18.625107       1 main.go:324] Node ha-858120-m04 has CIDR [10.244.3.0/24] 
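
	kindnet's reconciliation loop above visits every node, reading its InternalIP and PodCIDR so it can program routes; note that ha-858120-m03 no longer appears in the 20:33:18 pass, after its removal from the cluster. A minimal client-go sketch of the same enumeration is shown below; it is illustrative only and assumes kubeconfig access to the cluster.

	// nodecidrs.go - hedged sketch: enumerate nodes with their InternalIP and PodCIDRs, as kindnet does.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, n := range nodes.Items {
			var ip string
			for _, a := range n.Status.Addresses {
				if a.Type == corev1.NodeInternalIP {
					ip = a.Address
				}
			}
			// kindnet uses the PodCIDR of each remote node to install routes on the local node.
			fmt.Printf("node %s ip=%s podCIDRs=%v\n", n.Name, ip, n.Spec.PodCIDRs)
		}
	}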
	
	
	==> kube-apiserver [62a0a9e565cbdcc2535f376c89adec882f61fe061d0ec6760d840a514197add1] <==
	I1017 20:24:57.578002       1 server.go:150] Version: v1.34.1
	I1017 20:24:57.578115       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1017 20:24:59.609875       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1017 20:24:59.609983       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1017 20:24:59.610018       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1017 20:24:59.610051       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1017 20:24:59.610078       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1017 20:24:59.610108       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1017 20:24:59.610137       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1017 20:24:59.610165       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1017 20:24:59.610193       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1017 20:24:59.610224       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1017 20:24:59.610253       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1017 20:24:59.610280       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.1, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1017 20:24:59.718012       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1017 20:24:59.731289       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1017 20:24:59.735252       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1017 20:24:59.771928       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:24:59.798433       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1017 20:24:59.798556       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1017 20:24:59.798825       1 instance.go:239] Using reconciler: lease
	W1017 20:24:59.801239       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1017 20:25:19.716694       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1017 20:25:19.800493       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [638256daf481df23c6dc0c5f0e0206e9031fe11c02f69b76b36adebb4f77751b] <==
	I1017 20:25:27.747267       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:25:27.748048       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:25:27.748245       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 20:25:27.748283       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:25:27.755722       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 20:25:27.756801       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:25:27.756825       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:25:27.756833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:25:27.756839       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:25:27.757101       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:25:27.766983       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:25:27.769225       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:25:27.769488       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:25:27.769529       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:25:27.804915       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	W1017 20:25:27.812648       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3]
	I1017 20:25:27.814174       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:25:27.860617       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1017 20:25:27.881031       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1017 20:25:27.934437       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:25:28.456701       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1017 20:25:29.034012       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	I1017 20:25:34.663674       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:25:34.733687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 20:25:47.528926       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589] <==
	I1017 20:24:58.662485       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:25:01.671944       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1017 20:25:01.678227       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:01.684843       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1017 20:25:01.685107       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1017 20:25:01.685542       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1017 20:25:01.685556       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.598084       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [9ce296c3989a1de13e23cf6043950e41ef86d2754f0427491575c19984a6d824] <==
	I1017 20:25:34.378051       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:25:34.378105       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:25:34.378132       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:25:34.378160       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:25:34.387349       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 20:25:34.387499       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-858120-m04"
	I1017 20:25:34.392049       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:25:34.392153       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 20:25:34.403270       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:25:34.410392       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1017 20:25:34.420898       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:25:34.428383       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:25:34.429955       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:25:34.452565       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.452639       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:25:34.461211       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:25:34.461569       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:25:34.467835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:25:34.467866       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:25:34.467873       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:25:34.491229       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:25:34.517732       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 20:25:34.559435       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:31:26.912659       1 taint_eviction.go:111] "Deleting pod" logger="taint-eviction-controller" controller="taint-eviction-controller" pod="default/busybox-7b57f96db7-8llg5"
	E1017 20:33:16.861945       1 garbagecollector.go:360] "Unhandled Error" err="error syncing item &garbagecollector.node{identity:garbagecollector.objectReference{OwnerReference:v1.OwnerReference{APIVersion:\"storage.k8s.io/v1\", Kind:\"CSINode\", Name:\"ha-858120-m03\", UID:\"64932945-e010-4d53-bed9-a3728eabcfbb\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}, Namespace:\"\"}, dependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:1}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, dependents:map[*garbagecollector.node]struct {}{}, deletingDependents:false, deletingDependentsLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, beingDeleted:false, beingDeletedLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mut
ex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, virtual:false, virtualLock:sync.RWMutex{w:sync.Mutex{_:sync.noCopy{}, mu:sync.Mutex{state:0, sema:0x0}}, writerSem:0x0, readerSem:0x0, readerCount:atomic.Int32{_:atomic.noCopy{}, v:0}, readerWait:atomic.Int32{_:atomic.noCopy{}, v:0}}, owners:[]v1.OwnerReference{v1.OwnerReference{APIVersion:\"v1\", Kind:\"Node\", Name:\"ha-858120-m03\", UID:\"f4d35634-a9f7-49a6-89bb-703ca753c231\", Controller:(*bool)(nil), BlockOwnerDeletion:(*bool)(nil)}}}: csinodes.storage.k8s.io \"ha-858120-m03\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [30fbb87d1faca7dfc4d9f64b418999dbb75c40979544bddc3ad099cb9ad1a052] <==
	I1017 20:25:29.123150       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:25:29.213307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:25:29.325124       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:25:29.325227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1017 20:25:29.325371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:25:29.346595       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:25:29.346714       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:25:29.351735       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:25:29.352127       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:25:29.352304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:29.353581       1 config.go:200] "Starting service config controller"
	I1017 20:25:29.353705       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:25:29.353767       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:25:29.353798       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:25:29.353833       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:25:29.353860       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:25:29.354594       1 config.go:309] "Starting node config controller"
	I1017 20:25:29.357072       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:25:29.357126       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:25:29.454278       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:25:29.454382       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:25:29.454408       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7965630635b8cbdf5053400f9823a57e4067f90fb90d81f268bf4ed8379da2e6] <==
	I1017 20:25:27.558327       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:25:27.562717       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.562823       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:25:27.563148       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:25:27.563232       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1017 20:25:27.643472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:25:27.643574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:25:27.643643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:25:27.643704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:25:27.643780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:25:27.643853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:25:27.643897       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:25:27.643934       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 20:25:27.643978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:25:27.644027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:25:27.644071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:25:27.644291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:25:27.644343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:25:27.644389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:25:27.644437       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 20:25:27.644476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:25:27.644569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:25:27.644592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 20:25:27.685189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 20:25:29.163171       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891343     797 kubelet.go:3208] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.891630     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.901895     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-cni-cfg\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902145     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-xtables-lock\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902349     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-xtables-lock\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902474     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e536242d-87e2-4125-90c6-b8b7ce5c72cc-lib-modules\") pod \"kindnet-7bwxv\" (UID: \"e536242d-87e2-4125-90c6-b8b7ce5c72cc\") " pod="kube-system/kindnet-7bwxv"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902634     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e90d8e22-6ca9-4541-960c-4ecc95a31d5f-lib-modules\") pod \"kube-proxy-5qtb8\" (UID: \"e90d8e22-6ca9-4541-960c-4ecc95a31d5f\") " pod="kube-system/kube-proxy-5qtb8"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.902760     797 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f9e9dfd7-e90a-4da3-969d-2669daa3d123-tmp\") pod \"storage-provisioner\" (UID: \"f9e9dfd7-e90a-4da3-969d-2669daa3d123\") " pod="kube-system/storage-provisioner"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.923822     797 scope.go:117] "RemoveContainer" containerID="56f597b80ce9d7d4d8fe2f5fd196b39c7bbfa86ab1466771a978816f20b75589"
	Oct 17 20:25:27 ha-858120 kubelet[797]: E1017 20:25:27.935263     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-ha-858120\" already exists" pod="kube-system/etcd-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.935307     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:27 ha-858120 kubelet[797]: I1017 20:25:27.974898     797 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.006913     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ha-858120\" already exists" pod="kube-system/kube-apiserver-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.007149     797 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: E1017 20:25:28.035243     797 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-858120\" already exists" pod="kube-system/kube-controller-manager-ha-858120"
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.083154     797 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-858120" podStartSLOduration=1.083097255 podStartE2EDuration="1.083097255s" podCreationTimestamp="2025-10-17 20:25:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 20:25:28.06330839 +0000 UTC m=+31.423690073" watchObservedRunningTime="2025-10-17 20:25:28.083097255 +0000 UTC m=+31.443478937"
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.161072     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d WatchSource:0}: Error finding container b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d: Status 404 returned error can't find the container with id b8cc01892712db568e8731ba723a5e88e35f55eef7d6e2c190f2ff825c681e6d
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.182019     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b WatchSource:0}: Error finding container d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b: Status 404 returned error can't find the container with id d544ca125ccd8a6f780ca96c5d1a4f67ba40ad474571e10b3c223344aea6ac6b
	Oct 17 20:25:28 ha-858120 kubelet[797]: W1017 20:25:28.276364     797 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio-24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f WatchSource:0}: Error finding container 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f: Status 404 returned error can't find the container with id 24158395efe09635967efa4d36a567e806e5facd67c8db0e758f356488cff42f
	Oct 17 20:25:28 ha-858120 kubelet[797]: I1017 20:25:28.819931     797 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8957fe84f5b782b1a91a47b00072c3" path="/var/lib/kubelet/pods/fc8957fe84f5b782b1a91a47b00072c3/volumes"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.778662     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.778732     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a" err="rpc error: code = NotFound desc = could not find container \"4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a\": container with ID starting with 4ff297484ba1b0347f9dd01adc3315df1c8e90c5a9177cc65452dce0aecc5b0a not found: ID does not exist"
	Oct 17 20:25:56 ha-858120 kubelet[797]: E1017 20:25:56.779364     797 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c"
	Oct 17 20:25:56 ha-858120 kubelet[797]: I1017 20:25:56.779405     797 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c" err="rpc error: code = NotFound desc = could not find container \"9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c\": container with ID starting with 9d10b28326092247bf5881ace538cc4c3e32945ea8953d15f1bd7bd34ded739c not found: ID does not exist"
	Oct 17 20:25:59 ha-858120 kubelet[797]: I1017 20:25:59.061021     797 scope.go:117] "RemoveContainer" containerID="e299f9f677259417858bfdf991397b3ef57a6485f2baf285eaece413087c058b"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-858120 -n ha-858120
helpers_test.go:269: (dbg) Run:  kubectl --context ha-858120 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-twgcq
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq
helpers_test.go:290: (dbg) kubectl --context ha-858120 describe pod busybox-7b57f96db7-twgcq:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-twgcq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-b2h7l (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-b2h7l:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age   From               Message
	  ----     ------            ----  ----               -------
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  2m    default-scheduler  0/4 nodes are available: 2 node(s) didn't match pod anti-affinity rules, 2 node(s) had untolerated taint {node.kubernetes.io/unreachable: }. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.
	  Warning  FailedScheduling  12s   default-scheduler  0/4 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/unreachable: }, 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/4 nodes are available: 2 No preemption victims found for incoming pod, 2 Preemption is not helpful for scheduling.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (3.33s)
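Reading the describe output above, the replacement pod busybox-7b57f96db7-twgcq stays Pending because, after the secondary node was deleted, every remaining node is either unschedulable, carrying the node.kubernetes.io/unreachable taint, or already hosting a busybox replica that the pod anti-affinity rule excludes. The helper at helpers_test.go:269 finds such pods with a kubectl field selector; the snippet below is a minimal client-go sketch of the same query, not code from the test suite (the default kubeconfig loading and the package layout are assumptions; the context name ha-858120 is taken from the report).

	// nonrunning.go: hypothetical sketch of the non-running-pod query that
	// helpers_test.go:269 runs via kubectl (--field-selector=status.phase!=Running).
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: the local kubeconfig has the ha-858120 context used by the test.
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
			clientcmd.NewDefaultClientConfigLoadingRules(),
			&clientcmd.ConfigOverrides{CurrentContext: "ha-858120"},
		).ClientConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as `kubectl get po -A --field-selector=status.phase!=Running`.
		pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
			FieldSelector: "status.phase!=Running",
		})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}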

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.91s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-734339 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-734339 --output=json --user=testUser: exit status 80 (1.911648505s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"9c8604d8-9fb1-4990-bad5-ae4181ff5de5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-734339 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"9759342b-40b0-4c1d-b207-61fb2acf5b4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T20:38:19Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"6b570c56-3f19-4f83-af03-0749ecd2bfaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-734339 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.91s)
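The stdout block above is minikube's --output=json stream: one CloudEvents-style JSON object per line, with failures delivered as io.k8s.sigs.minikube.error events whose data carries the name (here GUEST_PAUSE), exit code, and message. A small hedged sketch of consuming that stream follows; the struct keeps only the fields visible in the report, and the pipe usage in the comment is an assumption, not part of the test.

	// events.go: hypothetical consumer for the line-delimited JSON events emitted
	// by `minikube ... --output=json`, keeping only fields shown in this report.
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type minikubeEvent struct {
		Type string `json:"type"` // e.g. io.k8s.sigs.minikube.step or io.k8s.sigs.minikube.error
		Data struct {
			Name     string `json:"name"`     // e.g. GUEST_PAUSE
			Message  string `json:"message"`
			ExitCode string `json:"exitcode"`
		} `json:"data"`
	}

	func main() {
		// Assumed usage: out/minikube-linux-arm64 pause -p <profile> --output=json | go run events.go
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some event lines are long
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip any non-JSON lines interleaved in test output
			}
			if ev.Type == "io.k8s.sigs.minikube.error" && ev.Data.Name != "" {
				fmt.Printf("error event %s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
			}
		}
	}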

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-734339 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-734339 --output=json --user=testUser: exit status 80 (2.003401493s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bd7018cd-dd3c-409b-98ed-d86aec381ca5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-734339 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"854c6a08-9d86-46be-988c-b4a4b088990b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-17T20:38:21Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e01b806b-e3b0-40ba-aef2-f91ce0519580","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-734339 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.00s)
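Both JSONOutput failures above (pause and unpause), like the TestPause failure further down, reduce to the same step: before changing container state, minikube lists the running containers with `sudo runc list -f json`, and on this node that command fails with "open /run/runc: no such file or directory", so the command exits with GUEST_PAUSE/GUEST_UNPAUSE after its retries. The sketch below only illustrates that probe; it is not minikube's actual helper, and the struct mirrors just the id/status fields of runc's JSON output. Whether the CRI-O runtime on this node keeps its runc state under a different root is not shown in the report.

	// runclist.go: hypothetical sketch of the "list running containers" probe the
	// pause/unpause path retries (shown in the report as `sudo runc list -f json`).
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer keeps only the fields of interest from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func listRunning() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On the failing node this surfaces as:
			//   open /run/runc: no such file or directory
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		cs, err := listRunning()
		if err != nil {
			fmt.Println("pause precheck failed:", err)
			return
		}
		for _, c := range cs {
			fmt.Println(c.ID, c.Status)
		}
	}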

                                                
                                    
x
+
TestPause/serial/Pause (9.1s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-017644 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-017644 --alsologtostderr -v=5: exit status 80 (2.586002976s)

                                                
                                                
-- stdout --
	* Pausing node pause-017644 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:57:20.909466  729401 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:57:20.909619  729401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:20.909632  729401 out.go:374] Setting ErrFile to fd 2...
	I1017 20:57:20.909638  729401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:20.909909  729401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:57:20.910162  729401 out.go:368] Setting JSON to false
	I1017 20:57:20.910190  729401 mustload.go:65] Loading cluster: pause-017644
	I1017 20:57:20.910602  729401 config.go:182] Loaded profile config "pause-017644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:57:20.911145  729401 cli_runner.go:164] Run: docker container inspect pause-017644 --format={{.State.Status}}
	I1017 20:57:20.930231  729401 host.go:66] Checking if "pause-017644" exists ...
	I1017 20:57:20.930581  729401 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:57:20.994951  729401 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:57:20.984515643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:57:20.995674  729401 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-017644 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 20:57:20.999137  729401 out.go:179] * Pausing node pause-017644 ... 
	I1017 20:57:21.002065  729401 host.go:66] Checking if "pause-017644" exists ...
	I1017 20:57:21.002453  729401 ssh_runner.go:195] Run: systemctl --version
	I1017 20:57:21.002527  729401 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-017644
	I1017 20:57:21.022950  729401 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33717 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/pause-017644/id_rsa Username:docker}
	I1017 20:57:21.125864  729401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:57:21.142270  729401 pause.go:52] kubelet running: true
	I1017 20:57:21.142339  729401 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:57:21.358371  729401 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:57:21.358523  729401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:57:21.429630  729401 cri.go:89] found id: "6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48"
	I1017 20:57:21.429653  729401 cri.go:89] found id: "8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358"
	I1017 20:57:21.429658  729401 cri.go:89] found id: "821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a"
	I1017 20:57:21.429663  729401 cri.go:89] found id: "e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f"
	I1017 20:57:21.429667  729401 cri.go:89] found id: "39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1"
	I1017 20:57:21.429670  729401 cri.go:89] found id: "c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80"
	I1017 20:57:21.429673  729401 cri.go:89] found id: "373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12"
	I1017 20:57:21.429676  729401 cri.go:89] found id: "10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f"
	I1017 20:57:21.429679  729401 cri.go:89] found id: "5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	I1017 20:57:21.429685  729401 cri.go:89] found id: "808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9"
	I1017 20:57:21.429688  729401 cri.go:89] found id: "0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391"
	I1017 20:57:21.429692  729401 cri.go:89] found id: "0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06"
	I1017 20:57:21.429694  729401 cri.go:89] found id: "1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22"
	I1017 20:57:21.429698  729401 cri.go:89] found id: "7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	I1017 20:57:21.429701  729401 cri.go:89] found id: ""
	I1017 20:57:21.429750  729401 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:57:21.441162  729401 retry.go:31] will retry after 266.9151ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:57:21Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:57:21.708630  729401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:57:21.721360  729401 pause.go:52] kubelet running: false
	I1017 20:57:21.721423  729401 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:57:21.920695  729401 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:57:21.920787  729401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:57:22.030571  729401 cri.go:89] found id: "6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48"
	I1017 20:57:22.030596  729401 cri.go:89] found id: "8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358"
	I1017 20:57:22.030601  729401 cri.go:89] found id: "821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a"
	I1017 20:57:22.030605  729401 cri.go:89] found id: "e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f"
	I1017 20:57:22.030608  729401 cri.go:89] found id: "39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1"
	I1017 20:57:22.030612  729401 cri.go:89] found id: "c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80"
	I1017 20:57:22.030615  729401 cri.go:89] found id: "373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12"
	I1017 20:57:22.030619  729401 cri.go:89] found id: "10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f"
	I1017 20:57:22.030622  729401 cri.go:89] found id: "5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	I1017 20:57:22.030628  729401 cri.go:89] found id: "808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9"
	I1017 20:57:22.030631  729401 cri.go:89] found id: "0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391"
	I1017 20:57:22.030634  729401 cri.go:89] found id: "0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06"
	I1017 20:57:22.030642  729401 cri.go:89] found id: "1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22"
	I1017 20:57:22.030650  729401 cri.go:89] found id: "7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	I1017 20:57:22.030654  729401 cri.go:89] found id: ""
	I1017 20:57:22.030715  729401 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:57:22.057286  729401 retry.go:31] will retry after 256.633399ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:57:22.314786  729401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:57:22.330978  729401 pause.go:52] kubelet running: false
	I1017 20:57:22.331056  729401 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:57:22.515833  729401 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:57:22.515912  729401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:57:22.626359  729401 cri.go:89] found id: "6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48"
	I1017 20:57:22.626380  729401 cri.go:89] found id: "8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358"
	I1017 20:57:22.626385  729401 cri.go:89] found id: "821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a"
	I1017 20:57:22.626389  729401 cri.go:89] found id: "e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f"
	I1017 20:57:22.626392  729401 cri.go:89] found id: "39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1"
	I1017 20:57:22.626395  729401 cri.go:89] found id: "c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80"
	I1017 20:57:22.626398  729401 cri.go:89] found id: "373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12"
	I1017 20:57:22.626401  729401 cri.go:89] found id: "10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f"
	I1017 20:57:22.626404  729401 cri.go:89] found id: "5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	I1017 20:57:22.626410  729401 cri.go:89] found id: "808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9"
	I1017 20:57:22.626413  729401 cri.go:89] found id: "0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391"
	I1017 20:57:22.626416  729401 cri.go:89] found id: "0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06"
	I1017 20:57:22.626419  729401 cri.go:89] found id: "1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22"
	I1017 20:57:22.626425  729401 cri.go:89] found id: "7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	I1017 20:57:22.626428  729401 cri.go:89] found id: ""
	I1017 20:57:22.626480  729401 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:57:22.639303  729401 retry.go:31] will retry after 422.545171ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:57:22Z" level=error msg="open /run/runc: no such file or directory"
	I1017 20:57:23.063008  729401 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:57:23.079561  729401 pause.go:52] kubelet running: false
	I1017 20:57:23.079628  729401 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 20:57:23.277924  729401 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 20:57:23.277999  729401 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 20:57:23.392842  729401 cri.go:89] found id: "6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48"
	I1017 20:57:23.392869  729401 cri.go:89] found id: "8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358"
	I1017 20:57:23.392874  729401 cri.go:89] found id: "821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a"
	I1017 20:57:23.392878  729401 cri.go:89] found id: "e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f"
	I1017 20:57:23.392881  729401 cri.go:89] found id: "39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1"
	I1017 20:57:23.392884  729401 cri.go:89] found id: "c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80"
	I1017 20:57:23.392887  729401 cri.go:89] found id: "373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12"
	I1017 20:57:23.392890  729401 cri.go:89] found id: "10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f"
	I1017 20:57:23.392893  729401 cri.go:89] found id: "5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	I1017 20:57:23.392899  729401 cri.go:89] found id: "808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9"
	I1017 20:57:23.392902  729401 cri.go:89] found id: "0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391"
	I1017 20:57:23.392905  729401 cri.go:89] found id: "0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06"
	I1017 20:57:23.392909  729401 cri.go:89] found id: "1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22"
	I1017 20:57:23.392912  729401 cri.go:89] found id: "7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	I1017 20:57:23.392915  729401 cri.go:89] found id: ""
	I1017 20:57:23.392959  729401 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 20:57:23.416471  729401 out.go:203] 
	W1017 20:57:23.419431  729401 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:57:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 20:57:23.419458  729401 out.go:285] * 
	* 
	W1017 20:57:23.427165  729401 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 20:57:23.431942  729401 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-017644 --alsologtostderr -v=5" : exit status 80
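The exit status 80 above is triggered by the "sudo runc list -f json" call, which exits 1 because /run/runc does not exist on the node. A minimal sketch for checking this manually against the same profile, assuming the pause-017644 container is still running; the paths queried below are illustrative and are not confirmed by this log:

	# re-run the exact listing minikube attempts inside the node
	out/minikube-linux-arm64 -p pause-017644 ssh "sudo runc list -f json"
	# confirm whether runc's default state root exists at all
	out/minikube-linux-arm64 -p pause-017644 ssh "sudo ls -la /run/runc"
	# inspect which runtime root cri-o is configured with (assumption: it may differ from /run/runc)
	out/minikube-linux-arm64 -p pause-017644 ssh "sudo crio config" | grep -i root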
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-017644
helpers_test.go:243: (dbg) docker inspect pause-017644:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e",
	        "Created": "2025-10-17T20:55:26.184153398Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 716921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:55:26.288761759Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/hosts",
	        "LogPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e-json.log",
	        "Name": "/pause-017644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-017644:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-017644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e",
	                "LowerDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-017644",
	                "Source": "/var/lib/docker/volumes/pause-017644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-017644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-017644",
	                "name.minikube.sigs.k8s.io": "pause-017644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c586331a921393db01709d3bd1990ecedc62a0df1aec22e7cd3e75eed7d3ec98",
	            "SandboxKey": "/var/run/docker/netns/c586331a9213",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33717"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33718"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33721"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33719"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33720"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-017644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:2e:e4:89:1c:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e2997b8968c2c42de85a0610f85e8c79d73a6593ee0473cb82afbaac0cbab4b",
	                    "EndpointID": "fc6171aec4cd0116b31c513e249ffde42facbf1c913405bf6ddb62ba56a84050",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-017644",
	                        "03a0ed07c4e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-017644 -n pause-017644
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-017644 -n pause-017644: exit status 2 (459.621524ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-017644 logs -n 25
E1017 20:57:25.233317  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-017644 logs -n 25: (1.650375424s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-667721 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat docker --no-pager                                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/docker/daemon.json                                                                           │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo docker system info                                                                                    │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cri-dockerd --version                                                                                 │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat containerd --no-pager                                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/containerd/config.toml                                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo containerd config dump                                                                                │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat crio --no-pager                                                                         │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo crio config                                                                                           │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ delete  │ -p cilium-667721                                                                                                            │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:56 UTC │
	│ start   │ -p force-systemd-env-762621 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-762621  │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:57 UTC │
	│ start   │ -p pause-017644 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-017644              │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:57 UTC │
	│ delete  │ -p force-systemd-env-762621                                                                                                 │ force-systemd-env-762621  │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │ 17 Oct 25 20:57 UTC │
	│ pause   │ -p pause-017644 --alsologtostderr -v=5                                                                                      │ pause-017644              │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │                     │
	│ start   │ -p force-systemd-flag-758295 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-758295 │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:57:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:57:22.663909  729755 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:57:22.664127  729755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:22.664140  729755 out.go:374] Setting ErrFile to fd 2...
	I1017 20:57:22.664146  729755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:22.664437  729755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:57:22.664882  729755 out.go:368] Setting JSON to false
	I1017 20:57:22.665873  729755 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13189,"bootTime":1760721454,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:57:22.665942  729755 start.go:141] virtualization:  
	I1017 20:57:22.669722  729755 out.go:179] * [force-systemd-flag-758295] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:57:22.674402  729755 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:57:22.674559  729755 notify.go:220] Checking for updates...
	I1017 20:57:22.681282  729755 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:57:22.684663  729755 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:57:22.688015  729755 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:57:22.691255  729755 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:57:22.694401  729755 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:57:22.698096  729755 config.go:182] Loaded profile config "pause-017644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:57:22.698222  729755 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:57:22.721046  729755 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:57:22.721176  729755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:57:22.784782  729755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:57:22.776019393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:57:22.784897  729755 docker.go:318] overlay module found
	I1017 20:57:22.788286  729755 out.go:179] * Using the docker driver based on user configuration
	I1017 20:57:22.791253  729755 start.go:305] selected driver: docker
	I1017 20:57:22.791273  729755 start.go:925] validating driver "docker" against <nil>
	I1017 20:57:22.791288  729755 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:57:22.792002  729755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:57:22.843881  729755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:57:22.835213513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:57:22.844037  729755 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:57:22.844256  729755 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 20:57:22.847337  729755 out.go:179] * Using Docker driver with root privileges
	I1017 20:57:22.850276  729755 cni.go:84] Creating CNI manager for ""
	I1017 20:57:22.850355  729755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:57:22.850368  729755 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:57:22.850449  729755 start.go:349] cluster config:
	{Name:force-systemd-flag-758295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-758295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:57:22.855533  729755 out.go:179] * Starting "force-systemd-flag-758295" primary control-plane node in "force-systemd-flag-758295" cluster
	I1017 20:57:22.858477  729755 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:57:22.861520  729755 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:57:22.864359  729755 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:57:22.864421  729755 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:57:22.864435  729755 cache.go:58] Caching tarball of preloaded images
	I1017 20:57:22.864463  729755 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:57:22.864533  729755 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:57:22.864544  729755 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:57:22.864643  729755 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/force-systemd-flag-758295/config.json ...
	I1017 20:57:22.864660  729755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/force-systemd-flag-758295/config.json: {Name:mk9b831ffeede480744b3856cc42ac4dee25c07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:57:22.884289  729755 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:57:22.884313  729755 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:57:22.884333  729755 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:57:22.884362  729755 start.go:360] acquireMachinesLock for force-systemd-flag-758295: {Name:mkfee4cb5251530b0392f328a8059e0b313e6283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:57:22.884477  729755 start.go:364] duration metric: took 93.778µs to acquireMachinesLock for "force-systemd-flag-758295"
	I1017 20:57:22.884507  729755 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-758295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-758295 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:57:22.884577  729755 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.782054409Z" level=info msg="Created container 821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a: kube-system/kube-apiserver-pause-017644/kube-apiserver" id=13d05865-0f74-4538-91f7-86e9e0376f69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.80799007Z" level=info msg="Starting container: 821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a" id=2fd003fa-6fca-4984-b850-5de842e98196 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.810972239Z" level=info msg="Started container" PID=2362 containerID=e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f description=kube-system/coredns-66bc5c9577-nlqlq/coredns id=c0bb9a46-cfe4-4af7-8115-125888c3dec5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c28a087f138b3183e09d7227d75fd45093739ba9c3e0ae8a1059abd225a5e7
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.843680733Z" level=info msg="Started container" PID=2380 containerID=821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a description=kube-system/kube-apiserver-pause-017644/kube-apiserver id=2fd003fa-6fca-4984-b850-5de842e98196 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9450d2c31dca40dcd44d2b68864b6b0a579522066584120ecdb3a1923963ec76
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.945616057Z" level=info msg="Created container 6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48: kube-system/kube-controller-manager-pause-017644/kube-controller-manager" id=5ea734d2-03fd-44da-b60c-c897dd55c82c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.946703013Z" level=info msg="Starting container: 6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48" id=462f4f69-86e5-4f4a-8e6b-bf933b8f3969 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.95141721Z" level=info msg="Started container" PID=2409 containerID=6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48 description=kube-system/kube-controller-manager-pause-017644/kube-controller-manager id=462f4f69-86e5-4f4a-8e6b-bf933b8f3969 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e19caf777e0c87d4568c09b027480020295eb6f13db62b0f42816d51cbe29402
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.980227469Z" level=info msg="Created container 8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358: kube-system/kindnet-5vj4v/kindnet-cni" id=37c1e7d3-453e-4e47-bc51-35a208f1c2df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.981237534Z" level=info msg="Starting container: 8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358" id=073e22ef-f255-4df4-9b88-1a839666f26f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.990833684Z" level=info msg="Started container" PID=2403 containerID=8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358 description=kube-system/kindnet-5vj4v/kindnet-cni id=073e22ef-f255-4df4-9b88-1a839666f26f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd7db9daf42ac9c30ff21b51872bdd06ea2257a16d172b5ddf542977d837d4e8
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.452184535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459416001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459591396Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459666851Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.463035151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.46536891Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.465472247Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475345266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475386817Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475410974Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480784904Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480824084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480847896Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.48761625Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.487660337Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6027191e958ff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   21 seconds ago       Running             kube-controller-manager   1                   e19caf777e0c8       kube-controller-manager-pause-017644   kube-system
	8564bb112f423       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   fd7db9daf42ac       kindnet-5vj4v                          kube-system
	821b30b913e83       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   21 seconds ago       Running             kube-apiserver            1                   9450d2c31dca4       kube-apiserver-pause-017644            kube-system
	e4588c2d818f2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   22 seconds ago       Running             coredns                   1                   c3c28a087f138       coredns-66bc5c9577-nlqlq               kube-system
	39beaf9b5b945       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   22 seconds ago       Running             etcd                      1                   28cdf41c09994       etcd-pause-017644                      kube-system
	c268ddf0b2b94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   22 seconds ago       Running             kube-scheduler            1                   ea6d14860a0cb       kube-scheduler-pause-017644            kube-system
	373484b72bb78       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   22 seconds ago       Running             kube-proxy                1                   a2a507556f039       kube-proxy-vvtk6                       kube-system
	10258541f312b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   37 seconds ago       Exited              coredns                   0                   c3c28a087f138       coredns-66bc5c9577-nlqlq               kube-system
	5e4222228ece2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   fd7db9daf42ac       kindnet-5vj4v                          kube-system
	808520cfab210       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   a2a507556f039       kube-proxy-vvtk6                       kube-system
	0f3263b7238f9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   28cdf41c09994       etcd-pause-017644                      kube-system
	0238c4af7be73       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   9450d2c31dca4       kube-apiserver-pause-017644            kube-system
	1d69fd2550709       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ea6d14860a0cb       kube-scheduler-pause-017644            kube-system
	7ec82d0ca5a88       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e19caf777e0c8       kube-controller-manager-pause-017644   kube-system
	
	
	==> coredns [10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49853 - 43808 "HINFO IN 3211772666261551472.4706601488496372243. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031544524s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43410 - 8553 "HINFO IN 8800095208714214793.4963916525816000312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032961125s
	
	
	==> describe nodes <==
	Name:               pause-017644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-017644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=pause-017644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_56_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:55:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-017644
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:57:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-017644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d6ec9904-ec33-4178-82ec-30d1ad057cde
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nlqlq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-017644                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-5vj4v                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-017644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-017644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-vvtk6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-017644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 78s                kube-proxy       
	  Normal   Starting                 14s                kube-proxy       
	  Normal   NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node pause-017644 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node pause-017644 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s (x8 over 97s)  kubelet          Node pause-017644 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 85s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 85s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-017644 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-017644 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-017644 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-017644 event: Registered Node pause-017644 in Controller
	  Normal   NodeReady                38s                kubelet          Node pause-017644 status is now: NodeReady
	  Warning  ContainerGCFailed        25s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           11s                node-controller  Node pause-017644 event: Registered Node pause-017644 in Controller
	
	
	==> dmesg <==
	[Oct17 20:24] overlayfs: idmapped layers are currently not supported
	[Oct17 20:25] overlayfs: idmapped layers are currently not supported
	[ +32.795830] overlayfs: idmapped layers are currently not supported
	[Oct17 20:33] overlayfs: idmapped layers are currently not supported
	[Oct17 20:34] overlayfs: idmapped layers are currently not supported
	[ +42.751418] overlayfs: idmapped layers are currently not supported
	[Oct17 20:35] overlayfs: idmapped layers are currently not supported
	[Oct17 20:37] overlayfs: idmapped layers are currently not supported
	[Oct17 20:42] overlayfs: idmapped layers are currently not supported
	[Oct17 20:43] overlayfs: idmapped layers are currently not supported
	[Oct17 20:44] overlayfs: idmapped layers are currently not supported
	[Oct17 20:45] overlayfs: idmapped layers are currently not supported
	[Oct17 20:46] overlayfs: idmapped layers are currently not supported
	[Oct17 20:48] overlayfs: idmapped layers are currently not supported
	[ +27.124680] overlayfs: idmapped layers are currently not supported
	[  +8.199606] overlayfs: idmapped layers are currently not supported
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391] <==
	{"level":"warn","ts":"2025-10-17T20:55:55.174899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.195408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.215921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.249656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.279515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.309732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.398170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:56:52.033776Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T20:56:52.033824Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-017644","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-17T20:56:52.033902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:56:52.181944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:56:52.182048Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.182102Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182100Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182126Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:56:52.182134Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.182158Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-17T20:56:52.182168Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182212Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182236Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:56:52.182243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.185326Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-17T20:56:52.185406Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.185436Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:56:52.185446Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-017644","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1] <==
	{"level":"warn","ts":"2025-10-17T20:57:08.001852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.037944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.059681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.103507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.153315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.162704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.195487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.216105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.240461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.317552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.321511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.340418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.372182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.386579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.419243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.442029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.471712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.518477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.545996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.576411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.635334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.652058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.715765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.812715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35318","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:57:25 up  3:39,  0 user,  load average: 4.54, 2.71, 1.99
	Linux pause-017644 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2] <==
	I1017 20:56:06.142683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:56:06.143274       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:56:06.143454       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:56:06.143497       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:56:06.143537       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:56:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:56:06.330483       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:56:06.330560       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:56:06.330596       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:56:06.331399       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:56:36.330624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:56:36.331998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:56:36.332153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:56:36.332230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 20:56:37.531663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:56:37.531694       1 metrics.go:72] Registering metrics
	I1017 20:56:37.531747       1 controller.go:711] "Syncing nftables rules"
	I1017 20:56:46.331736       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:56:46.331859       1 main.go:301] handling current node
	
	
	==> kindnet [8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358] <==
	I1017 20:57:03.137029       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:57:03.137285       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:57:03.137413       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:57:03.137433       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:57:03.137447       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:57:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:57:03.451780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:57:03.451807       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:57:03.451816       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:57:03.452513       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:57:10.652389       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:57:10.652429       1 metrics.go:72] Registering metrics
	I1017 20:57:10.652510       1 controller.go:711] "Syncing nftables rules"
	I1017 20:57:13.451744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:57:13.451833       1 main.go:301] handling current node
	I1017 20:57:23.452255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:57:23.452290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06] <==
	W1017 20:56:52.045724       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046131       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046241       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046331       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046496       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046544       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051540       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051630       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051695       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051752       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051807       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051853       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051911       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052273       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052399       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052489       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052595       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052692       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052781       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052870       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052958       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053478       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053604       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053812       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053968       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a] <==
	I1017 20:57:10.605271       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:57:10.615378       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:57:10.615405       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:57:10.615966       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:57:10.615995       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:57:10.616075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:57:10.616247       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:57:10.616280       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:57:10.616312       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:57:10.647390       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:57:10.647517       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:57:10.647542       1 policy_source.go:240] refreshing policies
	I1017 20:57:10.647723       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:57:10.648626       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:57:10.648652       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:57:10.648659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:57:10.648667       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:57:10.667438       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 20:57:10.683897       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:57:10.912344       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:57:12.424008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:57:13.933813       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:57:13.981594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:57:14.033333       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:57:14.144671       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48] <==
	I1017 20:57:13.674235       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:57:13.674249       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:57:13.674805       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:57:13.674817       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:57:13.674826       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:57:13.680995       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:57:13.682392       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:57:13.683606       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:57:13.683701       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:57:13.683771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:57:13.683801       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:57:13.683840       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:57:13.684181       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:57:13.686475       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:57:13.686641       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:57:13.686685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:57:13.699412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:57:13.711539       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:57:13.712508       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:57:13.712587       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:57:13.723903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:57:13.731245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:57:13.731345       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:57:13.731376       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:57:13.775204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab] <==
	I1017 20:56:04.297752       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:56:04.302669       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:56:04.302765       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:56:04.302867       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:56:04.303928       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-017644" podCIDRs=["10.244.0.0/24"]
	I1017 20:56:04.304105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:56:04.304156       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:56:04.304683       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-017644"
	I1017 20:56:04.304746       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:56:04.308921       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:56:04.314088       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:56:04.314175       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:56:04.315271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:56:04.325488       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:56:04.328748       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:56:04.334397       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:56:04.334635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:56:04.334502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:56:04.335517       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:56:04.335794       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:56:04.335808       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:56:04.335853       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:56:04.338561       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:56:04.356411       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:56:49.309791       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12] <==
	I1017 20:57:05.750744       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:57:07.927835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:57:10.647212       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:57:10.651227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:57:10.653401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:57:10.795797       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:57:10.795914       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:57:10.811387       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:57:10.811761       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:57:10.811961       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:57:10.813225       1 config.go:200] "Starting service config controller"
	I1017 20:57:10.818608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:57:10.818679       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:57:10.818709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:57:10.818760       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:57:10.818798       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:57:10.819549       1 config.go:309] "Starting node config controller"
	I1017 20:57:10.819613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:57:10.819644       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:57:10.923841       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:57:10.923956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:57:10.923976       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9] <==
	I1017 20:56:06.155285       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:56:06.273557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:56:06.375197       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:56:06.375237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:56:06.375317       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:56:06.459506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:56:06.459625       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:56:06.464228       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:56:06.464586       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:56:06.464603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:56:06.465873       1 config.go:200] "Starting service config controller"
	I1017 20:56:06.465894       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:56:06.465913       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:56:06.465922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:56:06.465936       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:56:06.465940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:56:06.467756       1 config.go:309] "Starting node config controller"
	I1017 20:56:06.467837       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:56:06.467870       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:56:06.566833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:56:06.566839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:56:06.566872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22] <==
	E1017 20:55:56.584819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:55:56.584959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:55:56.585069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:55:56.585180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:55:56.585211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:55:56.585905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:55:56.586269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:55:56.596006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:55:57.417140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:55:57.581518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:55:57.588695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:55:57.652800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:55:57.652975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:55:57.684396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:55:57.725148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:55:57.743247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:55:57.747050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:55:57.848273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 20:56:00.931829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:56:52.032911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 20:56:52.033094       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 20:56:52.033110       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 20:56:52.033132       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:56:52.033414       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 20:56:52.033434       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80] <==
	I1017 20:57:07.625114       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:57:10.743852       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:57:10.743890       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:57:10.762402       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:57:10.763189       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:57:10.763245       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:57:10.799449       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:57:10.763266       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.799594       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.763257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:57:10.800214       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:57:10.899969       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.900432       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:57:11.003194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.538473    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.581205    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="33f2ad75e8aeec3fb0cbd2f5adf31d50" pod="kube-system/kube-apiserver-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: I1017 20:57:02.581309    1299 scope.go:117] "RemoveContainer" containerID="7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582029    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="82e2b57fb25a75d2e88b2eca31dd4bf0" pod="kube-system/etcd-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582489    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vvtk6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582970    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nlqlq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.583596    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b609a7cc9a16ba87539db3711f84efa1" pod="kube-system/kube-controller-manager-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.583979    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: I1017 20:57:02.587092    1299 scope.go:117] "RemoveContainer" containerID="5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.587983    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b609a7cc9a16ba87539db3711f84efa1" pod="kube-system/kube-controller-manager-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.588445    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.588792    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="33f2ad75e8aeec3fb0cbd2f5adf31d50" pod="kube-system/kube-apiserver-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.589220    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="82e2b57fb25a75d2e88b2eca31dd4bf0" pod="kube-system/etcd-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.589632    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vvtk6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.590040    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5vj4v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="51e113cd-b1a8-4675-9b2d-28040c57ba2b" pod="kube-system/kindnet-5vj4v"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.590425    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nlqlq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.081530    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-vvtk6\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.263413    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5vj4v\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="51e113cd-b1a8-4675-9b2d-28040c57ba2b" pod="kube-system/kindnet-5vj4v"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.403344    1299 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-017644\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.403446    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nlqlq\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: W1017 20:57:10.623740    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 20:57:20 pause-017644 kubelet[1299]: W1017 20:57:20.642184    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 20:57:21 pause-017644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:57:21 pause-017644 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:57:21 pause-017644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-017644 -n pause-017644
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-017644 -n pause-017644: exit status 2 (449.174418ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-017644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-017644
helpers_test.go:243: (dbg) docker inspect pause-017644:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e",
	        "Created": "2025-10-17T20:55:26.184153398Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 716921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T20:55:26.288761759Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/hostname",
	        "HostsPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/hosts",
	        "LogPath": "/var/lib/docker/containers/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e/03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e-json.log",
	        "Name": "/pause-017644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-017644:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-017644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "03a0ed07c4e1b25703475cc838af330992f3c24f013268cab9bafeccebd5b53e",
	                "LowerDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a788a533bfb05895c29a666c795040e67e5aaad6dcd891589edc3a09b9f3bb4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-017644",
	                "Source": "/var/lib/docker/volumes/pause-017644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-017644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-017644",
	                "name.minikube.sigs.k8s.io": "pause-017644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "c586331a921393db01709d3bd1990ecedc62a0df1aec22e7cd3e75eed7d3ec98",
	            "SandboxKey": "/var/run/docker/netns/c586331a9213",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33717"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33718"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33721"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33719"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33720"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-017644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:2e:e4:89:1c:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e2997b8968c2c42de85a0610f85e8c79d73a6593ee0473cb82afbaac0cbab4b",
	                    "EndpointID": "fc6171aec4cd0116b31c513e249ffde42facbf1c913405bf6ddb62ba56a84050",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-017644",
	                        "03a0ed07c4e1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-017644 -n pause-017644
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-017644 -n pause-017644: exit status 2 (485.861207ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-017644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-017644 logs -n 25: (2.593312926s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-667721 sudo cat /var/lib/kubelet/config.yaml                                                                      │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status docker --all --full --no-pager                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat docker --no-pager                                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/docker/daemon.json                                                                           │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo docker system info                                                                                    │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status cri-docker --all --full --no-pager                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat cri-docker --no-pager                                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                              │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                        │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cri-dockerd --version                                                                                 │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status containerd --all --full --no-pager                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat containerd --no-pager                                                                   │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /lib/systemd/system/containerd.service                                                            │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo cat /etc/containerd/config.toml                                                                       │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo containerd config dump                                                                                │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl status crio --all --full --no-pager                                                         │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo systemctl cat crio --no-pager                                                                         │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                               │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ ssh     │ -p cilium-667721 sudo crio config                                                                                           │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │                     │
	│ delete  │ -p cilium-667721                                                                                                            │ cilium-667721             │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:56 UTC │
	│ start   │ -p force-systemd-env-762621 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-762621  │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:57 UTC │
	│ start   │ -p pause-017644 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-017644              │ jenkins │ v1.37.0 │ 17 Oct 25 20:56 UTC │ 17 Oct 25 20:57 UTC │
	│ delete  │ -p force-systemd-env-762621                                                                                                 │ force-systemd-env-762621  │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │ 17 Oct 25 20:57 UTC │
	│ pause   │ -p pause-017644 --alsologtostderr -v=5                                                                                      │ pause-017644              │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │                     │
	│ start   │ -p force-systemd-flag-758295 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-758295 │ jenkins │ v1.37.0 │ 17 Oct 25 20:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 20:57:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 20:57:22.663909  729755 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:57:22.664127  729755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:22.664140  729755 out.go:374] Setting ErrFile to fd 2...
	I1017 20:57:22.664146  729755 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:57:22.664437  729755 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:57:22.664882  729755 out.go:368] Setting JSON to false
	I1017 20:57:22.665873  729755 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13189,"bootTime":1760721454,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:57:22.665942  729755 start.go:141] virtualization:  
	I1017 20:57:22.669722  729755 out.go:179] * [force-systemd-flag-758295] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:57:22.674402  729755 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:57:22.674559  729755 notify.go:220] Checking for updates...
	I1017 20:57:22.681282  729755 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:57:22.684663  729755 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:57:22.688015  729755 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:57:22.691255  729755 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:57:22.694401  729755 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:57:22.698096  729755 config.go:182] Loaded profile config "pause-017644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:57:22.698222  729755 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:57:22.721046  729755 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:57:22.721176  729755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:57:22.784782  729755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:57:22.776019393 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:57:22.784897  729755 docker.go:318] overlay module found
	I1017 20:57:22.788286  729755 out.go:179] * Using the docker driver based on user configuration
	I1017 20:57:22.791253  729755 start.go:305] selected driver: docker
	I1017 20:57:22.791273  729755 start.go:925] validating driver "docker" against <nil>
	I1017 20:57:22.791288  729755 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:57:22.792002  729755 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:57:22.843881  729755 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:57:22.835213513 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:57:22.844037  729755 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 20:57:22.844256  729755 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 20:57:22.847337  729755 out.go:179] * Using Docker driver with root privileges
	I1017 20:57:22.850276  729755 cni.go:84] Creating CNI manager for ""
	I1017 20:57:22.850355  729755 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 20:57:22.850368  729755 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 20:57:22.850449  729755 start.go:349] cluster config:
	{Name:force-systemd-flag-758295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-758295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:57:22.855533  729755 out.go:179] * Starting "force-systemd-flag-758295" primary control-plane node in "force-systemd-flag-758295" cluster
	I1017 20:57:22.858477  729755 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 20:57:22.861520  729755 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 20:57:22.864359  729755 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 20:57:22.864421  729755 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 20:57:22.864435  729755 cache.go:58] Caching tarball of preloaded images
	I1017 20:57:22.864463  729755 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 20:57:22.864533  729755 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 20:57:22.864544  729755 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 20:57:22.864643  729755 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/force-systemd-flag-758295/config.json ...
	I1017 20:57:22.864660  729755 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/force-systemd-flag-758295/config.json: {Name:mk9b831ffeede480744b3856cc42ac4dee25c07b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 20:57:22.884289  729755 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 20:57:22.884313  729755 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 20:57:22.884333  729755 cache.go:232] Successfully downloaded all kic artifacts
	I1017 20:57:22.884362  729755 start.go:360] acquireMachinesLock for force-systemd-flag-758295: {Name:mkfee4cb5251530b0392f328a8059e0b313e6283 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 20:57:22.884477  729755 start.go:364] duration metric: took 93.778µs to acquireMachinesLock for "force-systemd-flag-758295"
	I1017 20:57:22.884507  729755 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-758295 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-758295 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 20:57:22.884577  729755 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.782054409Z" level=info msg="Created container 821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a: kube-system/kube-apiserver-pause-017644/kube-apiserver" id=13d05865-0f74-4538-91f7-86e9e0376f69 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.80799007Z" level=info msg="Starting container: 821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a" id=2fd003fa-6fca-4984-b850-5de842e98196 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.810972239Z" level=info msg="Started container" PID=2362 containerID=e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f description=kube-system/coredns-66bc5c9577-nlqlq/coredns id=c0bb9a46-cfe4-4af7-8115-125888c3dec5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3c28a087f138b3183e09d7227d75fd45093739ba9c3e0ae8a1059abd225a5e7
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.843680733Z" level=info msg="Started container" PID=2380 containerID=821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a description=kube-system/kube-apiserver-pause-017644/kube-apiserver id=2fd003fa-6fca-4984-b850-5de842e98196 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9450d2c31dca40dcd44d2b68864b6b0a579522066584120ecdb3a1923963ec76
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.945616057Z" level=info msg="Created container 6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48: kube-system/kube-controller-manager-pause-017644/kube-controller-manager" id=5ea734d2-03fd-44da-b60c-c897dd55c82c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.946703013Z" level=info msg="Starting container: 6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48" id=462f4f69-86e5-4f4a-8e6b-bf933b8f3969 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.95141721Z" level=info msg="Started container" PID=2409 containerID=6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48 description=kube-system/kube-controller-manager-pause-017644/kube-controller-manager id=462f4f69-86e5-4f4a-8e6b-bf933b8f3969 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e19caf777e0c87d4568c09b027480020295eb6f13db62b0f42816d51cbe29402
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.980227469Z" level=info msg="Created container 8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358: kube-system/kindnet-5vj4v/kindnet-cni" id=37c1e7d3-453e-4e47-bc51-35a208f1c2df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.981237534Z" level=info msg="Starting container: 8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358" id=073e22ef-f255-4df4-9b88-1a839666f26f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 20:57:02 pause-017644 crio[2084]: time="2025-10-17T20:57:02.990833684Z" level=info msg="Started container" PID=2403 containerID=8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358 description=kube-system/kindnet-5vj4v/kindnet-cni id=073e22ef-f255-4df4-9b88-1a839666f26f name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd7db9daf42ac9c30ff21b51872bdd06ea2257a16d172b5ddf542977d837d4e8
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.452184535Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459416001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459591396Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.459666851Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.463035151Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.46536891Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.465472247Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475345266Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475386817Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.475410974Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480784904Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480824084Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.480847896Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.48761625Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 20:57:13 pause-017644 crio[2084]: time="2025-10-17T20:57:13.487660337Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	6027191e958ff       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   24 seconds ago       Running             kube-controller-manager   1                   e19caf777e0c8       kube-controller-manager-pause-017644   kube-system
	8564bb112f423       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   fd7db9daf42ac       kindnet-5vj4v                          kube-system
	821b30b913e83       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   24 seconds ago       Running             kube-apiserver            1                   9450d2c31dca4       kube-apiserver-pause-017644            kube-system
	e4588c2d818f2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   24 seconds ago       Running             coredns                   1                   c3c28a087f138       coredns-66bc5c9577-nlqlq               kube-system
	39beaf9b5b945       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   25 seconds ago       Running             etcd                      1                   28cdf41c09994       etcd-pause-017644                      kube-system
	c268ddf0b2b94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   25 seconds ago       Running             kube-scheduler            1                   ea6d14860a0cb       kube-scheduler-pause-017644            kube-system
	373484b72bb78       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   25 seconds ago       Running             kube-proxy                1                   a2a507556f039       kube-proxy-vvtk6                       kube-system
	10258541f312b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   40 seconds ago       Exited              coredns                   0                   c3c28a087f138       coredns-66bc5c9577-nlqlq               kube-system
	5e4222228ece2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   fd7db9daf42ac       kindnet-5vj4v                          kube-system
	808520cfab210       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   a2a507556f039       kube-proxy-vvtk6                       kube-system
	0f3263b7238f9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   28cdf41c09994       etcd-pause-017644                      kube-system
	0238c4af7be73       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   9450d2c31dca4       kube-apiserver-pause-017644            kube-system
	1d69fd2550709       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   ea6d14860a0cb       kube-scheduler-pause-017644            kube-system
	7ec82d0ca5a88       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   e19caf777e0c8       kube-controller-manager-pause-017644   kube-system
	
	
	==> coredns [10258541f312b34c92602b2d9f4f36cfb1fa9d0dd37dee01907ab34d3251bb4f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49853 - 43808 "HINFO IN 3211772666261551472.4706601488496372243. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031544524s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [e4588c2d818f221ef1e93467d0b01a749ec812fd578475825a95a4c045e0506f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43410 - 8553 "HINFO IN 8800095208714214793.4963916525816000312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.032961125s
	
	
	==> describe nodes <==
	Name:               pause-017644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-017644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=pause-017644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T20_56_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 20:55:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-017644
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 20:57:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:55:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 20:56:46 +0000   Fri, 17 Oct 2025 20:56:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-017644
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                d6ec9904-ec33-4178-82ec-30d1ad057cde
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-nlqlq                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     83s
	  kube-system                 etcd-pause-017644                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         88s
	  kube-system                 kindnet-5vj4v                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      83s
	  kube-system                 kube-apiserver-pause-017644             250m (12%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-controller-manager-pause-017644    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 kube-proxy-vvtk6                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-scheduler-pause-017644             100m (5%)     0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 81s                  kube-proxy       
	  Normal   Starting                 17s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node pause-017644 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node pause-017644 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node pause-017644 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 89s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 89s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  88s                  kubelet          Node pause-017644 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s                  kubelet          Node pause-017644 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s                  kubelet          Node pause-017644 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           84s                  node-controller  Node pause-017644 event: Registered Node pause-017644 in Controller
	  Normal   NodeReady                42s                  kubelet          Node pause-017644 status is now: NodeReady
	  Warning  ContainerGCFailed        29s                  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           15s                  node-controller  Node pause-017644 event: Registered Node pause-017644 in Controller
	
	
	==> dmesg <==
	[Oct17 20:24] overlayfs: idmapped layers are currently not supported
	[Oct17 20:25] overlayfs: idmapped layers are currently not supported
	[ +32.795830] overlayfs: idmapped layers are currently not supported
	[Oct17 20:33] overlayfs: idmapped layers are currently not supported
	[Oct17 20:34] overlayfs: idmapped layers are currently not supported
	[ +42.751418] overlayfs: idmapped layers are currently not supported
	[Oct17 20:35] overlayfs: idmapped layers are currently not supported
	[Oct17 20:37] overlayfs: idmapped layers are currently not supported
	[Oct17 20:42] overlayfs: idmapped layers are currently not supported
	[Oct17 20:43] overlayfs: idmapped layers are currently not supported
	[Oct17 20:44] overlayfs: idmapped layers are currently not supported
	[Oct17 20:45] overlayfs: idmapped layers are currently not supported
	[Oct17 20:46] overlayfs: idmapped layers are currently not supported
	[Oct17 20:48] overlayfs: idmapped layers are currently not supported
	[ +27.124680] overlayfs: idmapped layers are currently not supported
	[  +8.199606] overlayfs: idmapped layers are currently not supported
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0f3263b7238f9a59c3a41190c5776b39b17063b2c8e9eb60c82893eb24eb7391] <==
	{"level":"warn","ts":"2025-10-17T20:55:55.174899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.195408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.215921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.249656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.279515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.309732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:55:55.398170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53860","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-17T20:56:52.033776Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-17T20:56:52.033824Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-017644","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-10-17T20:56:52.033902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:56:52.181944Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-17T20:56:52.182048Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.182102Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182100Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182126Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:56:52.182134Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.182158Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-10-17T20:56:52.182168Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182212Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-17T20:56:52.182236Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-17T20:56:52.182243Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.185326Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-10-17T20:56:52.185406Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-17T20:56:52.185436Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T20:56:52.185446Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-017644","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> etcd [39beaf9b5b94581c4c22a5b4c5265d869d8c7302867f3a5aa4f51aefc7863cb1] <==
	{"level":"warn","ts":"2025-10-17T20:57:08.001852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.037944Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.059681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.103507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.153315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.162704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.195487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.216105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.240461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.317552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.321511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.340418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.372182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.386579Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.419243Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.442029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.471712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.505493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.518477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.545996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.576411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.635334Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.652058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.715765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T20:57:08.812715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35318","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:57:28 up  3:39,  0 user,  load average: 4.33, 2.70, 1.99
	Linux pause-017644 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2] <==
	I1017 20:56:06.142683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:56:06.143274       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:56:06.143454       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:56:06.143497       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:56:06.143537       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:56:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:56:06.330483       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:56:06.330560       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:56:06.330596       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:56:06.331399       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 20:56:36.330624       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 20:56:36.331998       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 20:56:36.332153       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 20:56:36.332230       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 20:56:37.531663       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:56:37.531694       1 metrics.go:72] Registering metrics
	I1017 20:56:37.531747       1 controller.go:711] "Syncing nftables rules"
	I1017 20:56:46.331736       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:56:46.331859       1 main.go:301] handling current node
	
	
	==> kindnet [8564bb112f42323be06008fa5d14da3bb13ab28a7ce3b59eaae83b9b5459f358] <==
	I1017 20:57:03.137029       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 20:57:03.137285       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 20:57:03.137413       1 main.go:148] setting mtu 1500 for CNI 
	I1017 20:57:03.137433       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 20:57:03.137447       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T20:57:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 20:57:03.451780       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 20:57:03.451807       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 20:57:03.451816       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 20:57:03.452513       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 20:57:10.652389       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 20:57:10.652429       1 metrics.go:72] Registering metrics
	I1017 20:57:10.652510       1 controller.go:711] "Syncing nftables rules"
	I1017 20:57:13.451744       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:57:13.451833       1 main.go:301] handling current node
	I1017 20:57:23.452255       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 20:57:23.452290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0238c4af7be73aaf354168c6b87adc38b28c15dee8aeca2683059220e283aa06] <==
	W1017 20:56:52.045724       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046131       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046241       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046331       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046496       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.046544       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051540       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051630       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051695       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051752       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051807       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051853       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.051911       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052273       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052399       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052489       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052595       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052692       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052781       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052870       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.052958       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053478       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053604       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053812       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1017 20:56:52.053968       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [821b30b913e83cecd5ac7d16302aabe05ab80d479760e891d60fd31ecad81f8a] <==
	I1017 20:57:10.605271       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 20:57:10.615378       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 20:57:10.615405       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 20:57:10.615966       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 20:57:10.615995       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 20:57:10.616075       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 20:57:10.616247       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1017 20:57:10.616280       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 20:57:10.616312       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 20:57:10.647390       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 20:57:10.647517       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1017 20:57:10.647542       1 policy_source.go:240] refreshing policies
	I1017 20:57:10.647723       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 20:57:10.648626       1 aggregator.go:171] initial CRD sync complete...
	I1017 20:57:10.648652       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 20:57:10.648659       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 20:57:10.648667       1 cache.go:39] Caches are synced for autoregister controller
	I1017 20:57:10.667438       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1017 20:57:10.683897       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 20:57:10.912344       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 20:57:12.424008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 20:57:13.933813       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 20:57:13.981594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 20:57:14.033333       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 20:57:14.144671       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [6027191e958ffd8341c6d0c0c61afe9972a89f2359f78316c46bb6effc0bfe48] <==
	I1017 20:57:13.674235       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:57:13.674249       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 20:57:13.674805       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:57:13.674817       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:57:13.674826       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 20:57:13.680995       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:57:13.682392       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 20:57:13.683606       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 20:57:13.683701       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 20:57:13.683771       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 20:57:13.683801       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 20:57:13.683840       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 20:57:13.684181       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:57:13.686475       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 20:57:13.686641       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:57:13.686685       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:57:13.699412       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:57:13.711539       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 20:57:13.712508       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 20:57:13.712587       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 20:57:13.723903       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:57:13.731245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:57:13.731345       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 20:57:13.731376       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 20:57:13.775204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab] <==
	I1017 20:56:04.297752       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 20:56:04.302669       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 20:56:04.302765       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 20:56:04.302867       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 20:56:04.303928       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-017644" podCIDRs=["10.244.0.0/24"]
	I1017 20:56:04.304105       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 20:56:04.304156       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 20:56:04.304683       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-017644"
	I1017 20:56:04.304746       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1017 20:56:04.308921       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 20:56:04.314088       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 20:56:04.314175       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 20:56:04.315271       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 20:56:04.325488       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 20:56:04.328748       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 20:56:04.334397       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 20:56:04.334635       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 20:56:04.334502       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 20:56:04.335517       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 20:56:04.335794       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 20:56:04.335808       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 20:56:04.335853       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 20:56:04.338561       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1017 20:56:04.356411       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 20:56:49.309791       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [373484b72bb7845cd306a5fa0d620be4921520c5c6b3983f65c2023b00b71a12] <==
	I1017 20:57:05.750744       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:57:07.927835       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:57:10.647212       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:57:10.651227       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:57:10.653401       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:57:10.795797       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:57:10.795914       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:57:10.811387       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:57:10.811761       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:57:10.811961       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:57:10.813225       1 config.go:200] "Starting service config controller"
	I1017 20:57:10.818608       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:57:10.818679       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:57:10.818709       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:57:10.818760       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:57:10.818798       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:57:10.819549       1 config.go:309] "Starting node config controller"
	I1017 20:57:10.819613       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:57:10.819644       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:57:10.923841       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:57:10.923956       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:57:10.923976       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [808520cfab210f174320eb62d3cdcf72e13177b466c615d8fd7b6017089349a9] <==
	I1017 20:56:06.155285       1 server_linux.go:53] "Using iptables proxy"
	I1017 20:56:06.273557       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 20:56:06.375197       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 20:56:06.375237       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 20:56:06.375317       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 20:56:06.459506       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 20:56:06.459625       1 server_linux.go:132] "Using iptables Proxier"
	I1017 20:56:06.464228       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 20:56:06.464586       1 server.go:527] "Version info" version="v1.34.1"
	I1017 20:56:06.464603       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:56:06.465873       1 config.go:200] "Starting service config controller"
	I1017 20:56:06.465894       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 20:56:06.465913       1 config.go:106] "Starting endpoint slice config controller"
	I1017 20:56:06.465922       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 20:56:06.465936       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 20:56:06.465940       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 20:56:06.467756       1 config.go:309] "Starting node config controller"
	I1017 20:56:06.467837       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 20:56:06.467870       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 20:56:06.566833       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 20:56:06.566839       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 20:56:06.566872       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1d69fd2550709eee89ebc68b6b86c969faefeff7c0ef8318f625801033255b22] <==
	E1017 20:55:56.584819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 20:55:56.584959       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 20:55:56.585069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 20:55:56.585180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 20:55:56.585211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 20:55:56.585905       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 20:55:56.586269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 20:55:56.596006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:55:57.417140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 20:55:57.581518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 20:55:57.588695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 20:55:57.652800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 20:55:57.652975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 20:55:57.684396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 20:55:57.725148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 20:55:57.743247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 20:55:57.747050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 20:55:57.848273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1017 20:56:00.931829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:56:52.032911       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1017 20:56:52.033094       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1017 20:56:52.033110       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1017 20:56:52.033132       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:56:52.033414       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1017 20:56:52.033434       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [c268ddf0b2b94c608598cd0199abd15831b30c4a2f73dfd4b02b7b0a72ae2d80] <==
	I1017 20:57:07.625114       1 serving.go:386] Generated self-signed cert in-memory
	I1017 20:57:10.743852       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 20:57:10.743890       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 20:57:10.762402       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 20:57:10.763189       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 20:57:10.763245       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 20:57:10.799449       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 20:57:10.763266       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.799594       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.763257       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:57:10.800214       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 20:57:10.899969       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 20:57:10.900432       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 20:57:11.003194       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.538473    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.581205    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="33f2ad75e8aeec3fb0cbd2f5adf31d50" pod="kube-system/kube-apiserver-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: I1017 20:57:02.581309    1299 scope.go:117] "RemoveContainer" containerID="7ec82d0ca5a888312549eadfeca4f323a6b36f63b0c30b5c8eed0046dad162ab"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582029    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="82e2b57fb25a75d2e88b2eca31dd4bf0" pod="kube-system/etcd-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582489    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vvtk6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.582970    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nlqlq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.583596    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b609a7cc9a16ba87539db3711f84efa1" pod="kube-system/kube-controller-manager-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.583979    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: I1017 20:57:02.587092    1299 scope.go:117] "RemoveContainer" containerID="5e4222228ece2b0567761f90761a22fb1e13e43cb15708eef765e98dbeb89fb2"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.587983    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="b609a7cc9a16ba87539db3711f84efa1" pod="kube-system/kube-controller-manager-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.588445    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="434860e496a5f2e3cee22e6204bc64c0" pod="kube-system/kube-scheduler-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.588792    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="33f2ad75e8aeec3fb0cbd2f5adf31d50" pod="kube-system/kube-apiserver-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.589220    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-017644\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="82e2b57fb25a75d2e88b2eca31dd4bf0" pod="kube-system/etcd-pause-017644"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.589632    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-vvtk6\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.590040    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/kindnet-5vj4v\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="51e113cd-b1a8-4675-9b2d-28040c57ba2b" pod="kube-system/kindnet-5vj4v"
	Oct 17 20:57:02 pause-017644 kubelet[1299]: E1017 20:57:02.590425    1299 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.76.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-nlqlq\": dial tcp 192.168.76.2:8443: connect: connection refused" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.081530    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-vvtk6\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="fb35942b-0907-47de-86f1-596ba6bc6baf" pod="kube-system/kube-proxy-vvtk6"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.263413    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-5vj4v\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="51e113cd-b1a8-4675-9b2d-28040c57ba2b" pod="kube-system/kindnet-5vj4v"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.403344    1299 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-017644\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: E1017 20:57:10.403446    1299 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-nlqlq\" is forbidden: User \"system:node:pause-017644\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-017644' and this object" podUID="495df891-91df-4a9c-934d-765858e25995" pod="kube-system/coredns-66bc5c9577-nlqlq"
	Oct 17 20:57:10 pause-017644 kubelet[1299]: W1017 20:57:10.623740    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 20:57:20 pause-017644 kubelet[1299]: W1017 20:57:20.642184    1299 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Oct 17 20:57:21 pause-017644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 20:57:21 pause-017644 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 20:57:21 pause-017644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-017644 -n pause-017644
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-017644 -n pause-017644: exit status 2 (550.865269ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-017644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (9.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (329.968519ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:12:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-521710 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-521710 describe deploy/metrics-server -n kube-system: exit status 1 (102.482488ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-521710 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-521710
helpers_test.go:243: (dbg) docker inspect old-k8s-version-521710:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	        "Created": "2025-10-17T21:11:18.645427357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 801219,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:11:18.704934696Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hostname",
	        "HostsPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hosts",
	        "LogPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77-json.log",
	        "Name": "/old-k8s-version-521710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-521710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-521710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	                "LowerDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-521710",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-521710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-521710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "83acab87e17281492134c2e8556c880ccf59a69ab94bc21160fe2b2445dccf75",
	            "SandboxKey": "/var/run/docker/netns/83acab87e172",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33822"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33823"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33826"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33824"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33825"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-521710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:29:0a:a0:be:16",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0dbd01eef2ecf0cfa290a0ca03fecc2259469a874644e9e5b874fbcdc1b5668f",
	                    "EndpointID": "e11c824f298a88c9485a8848273cbb2f4fd65aed3c6369a46e3005e8ea7c2e72",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-521710",
	                        "35a78dd09101"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25: (1.430461882s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat kubelet --no-pager                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status docker --all --full --no-pager                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat docker --no-pager                                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/docker/daemon.json                                                                                                       │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo docker system info                                                                                                                │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat cri-docker --no-pager                                                                                               │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cri-dockerd --version                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status containerd --all --full --no-pager                                                                               │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                               │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                        │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                            │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                       │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                        │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain            │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:12:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:12:16.945774  806397 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:12:16.945944  806397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:12:16.945967  806397 out.go:374] Setting ErrFile to fd 2...
	I1017 21:12:16.945996  806397 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:12:16.946286  806397 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:12:16.946869  806397 out.go:368] Setting JSON to false
	I1017 21:12:16.947905  806397 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14083,"bootTime":1760721454,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:12:16.948015  806397 start.go:141] virtualization:  
	I1017 21:12:16.952014  806397 out.go:179] * [no-preload-820018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:12:16.956157  806397 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:12:16.956231  806397 notify.go:220] Checking for updates...
	I1017 21:12:16.962816  806397 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:12:16.965907  806397 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:12:16.969036  806397 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:12:16.972202  806397 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:12:16.975264  806397 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:12:16.978791  806397 config.go:182] Loaded profile config "old-k8s-version-521710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 21:12:16.978904  806397 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:12:17.000776  806397 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:12:17.000907  806397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:12:17.069839  806397 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:12:17.060209828 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:12:17.069940  806397 docker.go:318] overlay module found
	I1017 21:12:17.073135  806397 out.go:179] * Using the docker driver based on user configuration
	I1017 21:12:17.076061  806397 start.go:305] selected driver: docker
	I1017 21:12:17.076099  806397 start.go:925] validating driver "docker" against <nil>
	I1017 21:12:17.076114  806397 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:12:17.076864  806397 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:12:17.130914  806397 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:12:17.121780153 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:12:17.131093  806397 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 21:12:17.131431  806397 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:12:17.134572  806397 out.go:179] * Using Docker driver with root privileges
	I1017 21:12:17.137515  806397 cni.go:84] Creating CNI manager for ""
	I1017 21:12:17.137581  806397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:12:17.137594  806397 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:12:17.137674  806397 start.go:349] cluster config:
	{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:12:17.140803  806397 out.go:179] * Starting "no-preload-820018" primary control-plane node in "no-preload-820018" cluster
	I1017 21:12:17.143644  806397 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:12:17.146566  806397 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:12:17.149383  806397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:12:17.149533  806397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:12:17.149576  806397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json: {Name:mkdd3598b293750b8aeb857fab0f823966b3aecc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:12:17.149785  806397 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:12:17.150055  806397 cache.go:107] acquiring lock: {Name:mk40b757c19c3c9274f9f5d80ab21002ed44c3fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.150122  806397 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 21:12:17.150135  806397 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 90.283µs
	I1017 21:12:17.150143  806397 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 21:12:17.150161  806397 cache.go:107] acquiring lock: {Name:mkab9c4a8cb8e1bf28dffee17f9a3ed781aeb58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.150241  806397 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:17.150552  806397 cache.go:107] acquiring lock: {Name:mkb0f531469cc497e90953411691aebfea202dba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.150651  806397 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:17.150878  806397 cache.go:107] acquiring lock: {Name:mkc7975906f97cc89b61c851770f9e445c0bd241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.150970  806397 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:17.151236  806397 cache.go:107] acquiring lock: {Name:mk7d4188cf80de21ea7a2f21ef7ea3cdd3e61d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.151357  806397 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:17.151564  806397 cache.go:107] acquiring lock: {Name:mkc7f366c6bc39751a468519a3c4e03edbde6c9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.151670  806397 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1017 21:12:17.151872  806397 cache.go:107] acquiring lock: {Name:mk31f5a4c7a30c2888716a3df14a08c66478a7b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.151961  806397 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:17.152169  806397 cache.go:107] acquiring lock: {Name:mkb38536a5bb91d51d50b4384af5536a1bee04d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.152265  806397 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:17.155968  806397 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:17.156569  806397 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:17.156759  806397 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:17.156888  806397 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1017 21:12:17.157014  806397 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:17.157138  806397 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:17.157364  806397 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:17.175362  806397 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:12:17.175384  806397 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:12:17.175402  806397 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:12:17.175424  806397 start.go:360] acquireMachinesLock for no-preload-820018: {Name:mk60df73c299cbe0a2eb1abd2d4c927199ea7cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:17.175574  806397 start.go:364] duration metric: took 128.749µs to acquireMachinesLock for "no-preload-820018"
	I1017 21:12:17.175608  806397 start.go:93] Provisioning new machine with config: &{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:12:17.175686  806397 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:12:17.764249  800785 node_ready.go:57] node "old-k8s-version-521710" has "Ready":"False" status (will retry)
	I1017 21:12:19.263964  800785 node_ready.go:49] node "old-k8s-version-521710" is "Ready"
	I1017 21:12:19.263997  800785 node_ready.go:38] duration metric: took 15.003582246s for node "old-k8s-version-521710" to be "Ready" ...
	I1017 21:12:19.264010  800785 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:12:19.264066  800785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:12:19.365605  800785 api_server.go:72] duration metric: took 17.897506383s to wait for apiserver process to appear ...
	I1017 21:12:19.365628  800785 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:12:19.365647  800785 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:12:19.382332  800785 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:12:19.387332  800785 api_server.go:141] control plane version: v1.28.0
	I1017 21:12:19.387360  800785 api_server.go:131] duration metric: took 21.725452ms to wait for apiserver health ...
	I1017 21:12:19.387370  800785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:12:19.400760  800785 system_pods.go:59] 8 kube-system pods found
	I1017 21:12:19.400818  800785 system_pods.go:61] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:12:19.400828  800785 system_pods.go:61] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running
	I1017 21:12:19.400834  800785 system_pods.go:61] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:12:19.400839  800785 system_pods.go:61] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running
	I1017 21:12:19.400844  800785 system_pods.go:61] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running
	I1017 21:12:19.400854  800785 system_pods.go:61] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:12:19.400868  800785 system_pods.go:61] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running
	I1017 21:12:19.400875  800785 system_pods.go:61] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:12:19.400885  800785 system_pods.go:74] duration metric: took 13.509445ms to wait for pod list to return data ...
	I1017 21:12:19.400901  800785 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:12:19.415342  800785 default_sa.go:45] found service account: "default"
	I1017 21:12:19.415425  800785 default_sa.go:55] duration metric: took 14.51723ms for default service account to be created ...
	I1017 21:12:19.415450  800785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:12:19.421031  800785 system_pods.go:86] 8 kube-system pods found
	I1017 21:12:19.421119  800785 system_pods.go:89] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:12:19.421143  800785 system_pods.go:89] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running
	I1017 21:12:19.421163  800785 system_pods.go:89] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:12:19.421193  800785 system_pods.go:89] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running
	I1017 21:12:19.421215  800785 system_pods.go:89] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running
	I1017 21:12:19.421235  800785 system_pods.go:89] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:12:19.421264  800785 system_pods.go:89] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running
	I1017 21:12:19.421299  800785 system_pods.go:89] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:12:19.421347  800785 retry.go:31] will retry after 238.549436ms: missing components: kube-dns
	I1017 21:12:19.694071  800785 system_pods.go:86] 8 kube-system pods found
	I1017 21:12:19.694152  800785 system_pods.go:89] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:12:19.694174  800785 system_pods.go:89] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running
	I1017 21:12:19.694212  800785 system_pods.go:89] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:12:19.694230  800785 system_pods.go:89] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running
	I1017 21:12:19.694621  800785 system_pods.go:89] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running
	I1017 21:12:19.694641  800785 system_pods.go:89] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:12:19.694661  800785 system_pods.go:89] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running
	I1017 21:12:19.694703  800785 system_pods.go:89] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:12:19.694742  800785 retry.go:31] will retry after 301.462958ms: missing components: kube-dns
	I1017 21:12:20.001674  800785 system_pods.go:86] 8 kube-system pods found
	I1017 21:12:20.001706  800785 system_pods.go:89] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:12:20.001713  800785 system_pods.go:89] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running
	I1017 21:12:20.001719  800785 system_pods.go:89] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:12:20.001723  800785 system_pods.go:89] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running
	I1017 21:12:20.001728  800785 system_pods.go:89] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running
	I1017 21:12:20.001732  800785 system_pods.go:89] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:12:20.001736  800785 system_pods.go:89] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running
	I1017 21:12:20.001740  800785 system_pods.go:89] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Running
	I1017 21:12:20.001747  800785 system_pods.go:126] duration metric: took 586.279046ms to wait for k8s-apps to be running ...
	I1017 21:12:20.001755  800785 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:12:20.001813  800785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:12:20.025322  800785 system_svc.go:56] duration metric: took 23.555369ms WaitForService to wait for kubelet
	I1017 21:12:20.025348  800785 kubeadm.go:586] duration metric: took 18.55725401s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:12:20.025368  800785 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:12:20.029089  800785 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:12:20.029172  800785 node_conditions.go:123] node cpu capacity is 2
	I1017 21:12:20.029209  800785 node_conditions.go:105] duration metric: took 3.834481ms to run NodePressure ...
	I1017 21:12:20.029234  800785 start.go:241] waiting for startup goroutines ...
	I1017 21:12:20.029255  800785 start.go:246] waiting for cluster config update ...
	I1017 21:12:20.029293  800785 start.go:255] writing updated cluster config ...
	I1017 21:12:20.029649  800785 ssh_runner.go:195] Run: rm -f paused
	I1017 21:12:20.035905  800785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:12:20.043550  800785 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vbl7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.049738  800785 pod_ready.go:94] pod "coredns-5dd5756b68-vbl7d" is "Ready"
	I1017 21:12:21.049769  800785 pod_ready.go:86] duration metric: took 1.006195234s for pod "coredns-5dd5756b68-vbl7d" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.053684  800785 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.059583  800785 pod_ready.go:94] pod "etcd-old-k8s-version-521710" is "Ready"
	I1017 21:12:21.059604  800785 pod_ready.go:86] duration metric: took 5.899577ms for pod "etcd-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.063242  800785 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.069321  800785 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-521710" is "Ready"
	I1017 21:12:21.069343  800785 pod_ready.go:86] duration metric: took 6.079895ms for pod "kube-apiserver-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.073171  800785 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.248099  800785 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-521710" is "Ready"
	I1017 21:12:21.248174  800785 pod_ready.go:86] duration metric: took 174.930674ms for pod "kube-controller-manager-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:21.448608  800785 pod_ready.go:83] waiting for pod "kube-proxy-dz7dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:17.181266  806397 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:12:17.181519  806397 start.go:159] libmachine.API.Create for "no-preload-820018" (driver="docker")
	I1017 21:12:17.181569  806397 client.go:168] LocalClient.Create starting
	I1017 21:12:17.181651  806397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:12:17.181688  806397 main.go:141] libmachine: Decoding PEM data...
	I1017 21:12:17.181712  806397 main.go:141] libmachine: Parsing certificate...
	I1017 21:12:17.181774  806397 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:12:17.181799  806397 main.go:141] libmachine: Decoding PEM data...
	I1017 21:12:17.181813  806397 main.go:141] libmachine: Parsing certificate...
	I1017 21:12:17.182205  806397 cli_runner.go:164] Run: docker network inspect no-preload-820018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:12:17.205961  806397 cli_runner.go:211] docker network inspect no-preload-820018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:12:17.206045  806397 network_create.go:284] running [docker network inspect no-preload-820018] to gather additional debugging logs...
	I1017 21:12:17.206065  806397 cli_runner.go:164] Run: docker network inspect no-preload-820018
	W1017 21:12:17.221990  806397 cli_runner.go:211] docker network inspect no-preload-820018 returned with exit code 1
	I1017 21:12:17.222024  806397 network_create.go:287] error running [docker network inspect no-preload-820018]: docker network inspect no-preload-820018: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-820018 not found
	I1017 21:12:17.222038  806397 network_create.go:289] output of [docker network inspect no-preload-820018]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-820018 not found
	
	** /stderr **
	I1017 21:12:17.222139  806397 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:12:17.239362  806397 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:12:17.239805  806397 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:12:17.240151  806397 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:12:17.240471  806397 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0dbd01eef2ec IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:ee:27:07:e6:ba} reservation:<nil>}
	I1017 21:12:17.240915  806397 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bd3de0}
	I1017 21:12:17.240936  806397 network_create.go:124] attempt to create docker network no-preload-820018 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 21:12:17.241030  806397 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-820018 no-preload-820018
	I1017 21:12:17.311461  806397 network_create.go:108] docker network no-preload-820018 192.168.85.0/24 created
	I1017 21:12:17.311506  806397 kic.go:121] calculated static IP "192.168.85.2" for the "no-preload-820018" container
	I1017 21:12:17.311579  806397 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:12:17.344556  806397 cli_runner.go:164] Run: docker volume create no-preload-820018 --label name.minikube.sigs.k8s.io=no-preload-820018 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:12:17.363379  806397 oci.go:103] Successfully created a docker volume no-preload-820018
	I1017 21:12:17.363474  806397 cli_runner.go:164] Run: docker run --rm --name no-preload-820018-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820018 --entrypoint /usr/bin/test -v no-preload-820018:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:12:17.494519  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1017 21:12:17.514928  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1017 21:12:17.531843  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1017 21:12:17.534242  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1017 21:12:17.545481  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1017 21:12:17.555567  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1017 21:12:17.557536  806397 cache.go:162] opening:  /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1017 21:12:17.617322  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1017 21:12:17.617400  806397 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 465.837254ms
	I1017 21:12:17.617425  806397 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 21:12:18.003681  806397 oci.go:107] Successfully prepared a docker volume no-preload-820018
	I1017 21:12:18.003723  806397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1017 21:12:18.003875  806397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:12:18.003982  806397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:12:18.089607  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 21:12:18.089629  806397 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 938.397187ms
	I1017 21:12:18.089643  806397 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 21:12:18.090730  806397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-820018 --name no-preload-820018 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-820018 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-820018 --network no-preload-820018 --ip 192.168.85.2 --volume no-preload-820018:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:12:18.513341  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Running}}
	I1017 21:12:18.551376  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 21:12:18.551408  806397 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.400532787s
	I1017 21:12:18.551420  806397 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 21:12:18.570924  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 21:12:18.580995  806397 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.428818723s
	I1017 21:12:18.582721  806397 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 21:12:18.623025  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:12:18.702335  806397 cli_runner.go:164] Run: docker exec no-preload-820018 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:12:18.748041  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 21:12:18.748065  806397 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.597516826s
	I1017 21:12:18.748082  806397 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 21:12:18.772018  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 21:12:18.772049  806397 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.62188778s
	I1017 21:12:18.772061  806397 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 21:12:18.809655  806397 oci.go:144] the created container "no-preload-820018" has a running status.
	I1017 21:12:18.809681  806397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa...
	I1017 21:12:19.774442  806397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:12:19.801849  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:12:19.845935  806397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:12:19.845960  806397 kic_runner.go:114] Args: [docker exec --privileged no-preload-820018 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:12:19.913689  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:12:19.935316  806397 machine.go:93] provisionDockerMachine start ...
	I1017 21:12:19.935417  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:19.960495  806397 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:19.960842  806397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1017 21:12:19.960858  806397 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:12:20.187894  806397 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
	
	I1017 21:12:20.187920  806397 ubuntu.go:182] provisioning hostname "no-preload-820018"
	I1017 21:12:20.187985  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:20.230021  806397 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:20.230520  806397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1017 21:12:20.230565  806397 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820018 && echo "no-preload-820018" | sudo tee /etc/hostname
	I1017 21:12:20.242370  806397 cache.go:157] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 21:12:20.242400  806397 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 3.090530954s
	I1017 21:12:20.242412  806397 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 21:12:20.242444  806397 cache.go:87] Successfully saved all images to host disk.
	I1017 21:12:20.405105  806397 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
	
	I1017 21:12:20.405185  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:20.426726  806397 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:20.427031  806397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1017 21:12:20.427053  806397 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820018/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:12:20.575436  806397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:12:20.575465  806397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:12:20.575484  806397 ubuntu.go:190] setting up certificates
	I1017 21:12:20.575494  806397 provision.go:84] configureAuth start
	I1017 21:12:20.575555  806397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:12:20.592937  806397 provision.go:143] copyHostCerts
	I1017 21:12:20.593008  806397 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:12:20.593025  806397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:12:20.593150  806397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:12:20.593291  806397 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:12:20.593320  806397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:12:20.593356  806397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:12:20.593450  806397 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:12:20.593475  806397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:12:20.593506  806397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:12:20.593597  806397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.no-preload-820018 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820018]
	I1017 21:12:21.847685  800785 pod_ready.go:94] pod "kube-proxy-dz7dm" is "Ready"
	I1017 21:12:21.847708  800785 pod_ready.go:86] duration metric: took 399.027989ms for pod "kube-proxy-dz7dm" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:22.048438  800785 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:22.448093  800785 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-521710" is "Ready"
	I1017 21:12:22.448114  800785 pod_ready.go:86] duration metric: took 399.645729ms for pod "kube-scheduler-old-k8s-version-521710" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:12:22.448126  800785 pod_ready.go:40] duration metric: took 2.41218961s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:12:22.524688  800785 start.go:624] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1017 21:12:22.528093  800785 out.go:203] 
	W1017 21:12:22.530973  800785 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1017 21:12:22.533926  800785 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1017 21:12:22.537733  800785 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-521710" cluster and "default" namespace by default
	I1017 21:12:22.287160  806397 provision.go:177] copyRemoteCerts
	I1017 21:12:22.287231  806397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:12:22.287278  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:22.304884  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:22.406824  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:12:22.424193  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:12:22.443263  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:12:22.465113  806397 provision.go:87] duration metric: took 1.889603954s to configureAuth
	I1017 21:12:22.465140  806397 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:12:22.465322  806397 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:12:22.465433  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:22.492853  806397 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:22.493178  806397 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33827 <nil> <nil>}
	I1017 21:12:22.493195  806397 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:12:22.813104  806397 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:12:22.813124  806397 machine.go:96] duration metric: took 2.877783161s to provisionDockerMachine
	I1017 21:12:22.813135  806397 client.go:171] duration metric: took 5.63155511s to LocalClient.Create
	I1017 21:12:22.813148  806397 start.go:167] duration metric: took 5.631630656s to libmachine.API.Create "no-preload-820018"
	I1017 21:12:22.813155  806397 start.go:293] postStartSetup for "no-preload-820018" (driver="docker")
	I1017 21:12:22.813165  806397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:12:22.813231  806397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:12:22.813270  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:22.834383  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:22.939513  806397 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:12:22.942931  806397 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:12:22.943006  806397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:12:22.943032  806397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:12:22.943098  806397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:12:22.943196  806397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:12:22.943306  806397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:12:22.951153  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:12:22.969909  806397 start.go:296] duration metric: took 156.739244ms for postStartSetup
	I1017 21:12:22.970326  806397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:12:22.987856  806397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:12:22.988153  806397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:12:22.988209  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:23.019168  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:23.120436  806397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:12:23.125762  806397 start.go:128] duration metric: took 5.950058701s to createHost
	I1017 21:12:23.125785  806397 start.go:83] releasing machines lock for "no-preload-820018", held for 5.950194554s
	I1017 21:12:23.125862  806397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:12:23.143583  806397 ssh_runner.go:195] Run: cat /version.json
	I1017 21:12:23.143636  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:23.143708  806397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:12:23.143777  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:23.171787  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:23.179735  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:23.275440  806397 ssh_runner.go:195] Run: systemctl --version
	I1017 21:12:23.368973  806397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:12:23.412946  806397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:12:23.417254  806397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:12:23.417355  806397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:12:23.446353  806397 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:12:23.446390  806397 start.go:495] detecting cgroup driver to use...
	I1017 21:12:23.446425  806397 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:12:23.446491  806397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:12:23.465126  806397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:12:23.479095  806397 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:12:23.479229  806397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:12:23.496784  806397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:12:23.519618  806397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:12:23.654073  806397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:12:23.783770  806397 docker.go:234] disabling docker service ...
	I1017 21:12:23.783836  806397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:12:23.815681  806397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:12:23.842601  806397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:12:24.017609  806397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:12:24.176328  806397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:12:24.189962  806397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:12:24.204173  806397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:12:24.204238  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.214234  806397 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:12:24.214303  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.223572  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.232564  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.242014  806397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:12:24.250381  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.259895  806397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.275082  806397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:24.285149  806397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:12:24.292850  806397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:12:24.300389  806397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:12:24.419570  806397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:12:24.539647  806397 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:12:24.539748  806397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:12:24.544154  806397 start.go:563] Will wait 60s for crictl version
	I1017 21:12:24.544241  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:24.547990  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:12:24.574803  806397 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:12:24.574913  806397 ssh_runner.go:195] Run: crio --version
	I1017 21:12:24.604337  806397 ssh_runner.go:195] Run: crio --version
	I1017 21:12:24.638279  806397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:12:24.641203  806397 cli_runner.go:164] Run: docker network inspect no-preload-820018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:12:24.657594  806397 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:12:24.661496  806397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:12:24.670827  806397 kubeadm.go:883] updating cluster {Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:12:24.670939  806397 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:12:24.670985  806397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:12:24.698418  806397 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1017 21:12:24.698444  806397 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1017 21:12:24.698500  806397 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:24.698711  806397 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:24.698815  806397 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:24.698900  806397 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:24.698985  806397 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:24.699070  806397 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1017 21:12:24.699206  806397 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:24.699307  806397 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:24.700906  806397 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:24.700915  806397 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:24.700972  806397 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1017 21:12:24.701014  806397 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:24.701129  806397 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:24.701258  806397 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:24.701332  806397 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:24.701392  806397 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:24.984476  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:25.006547  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:25.024558  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1017 21:12:25.050291  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:25.064630  806397 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1017 21:12:25.064671  806397 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:25.064726  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.071533  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:25.089970  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:25.095573  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:25.141360  806397 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1017 21:12:25.141417  806397 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:25.141477  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.209467  806397 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1017 21:12:25.209575  806397 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1017 21:12:25.209650  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.275854  806397 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1017 21:12:25.275909  806397 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:25.275989  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.276098  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:25.276196  806397 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1017 21:12:25.276239  806397 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:25.276271  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.283847  806397 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1017 21:12:25.283995  806397 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:25.284044  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.284074  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 21:12:25.284043  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:25.283965  806397 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1017 21:12:25.284114  806397 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:25.284137  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:25.331819  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:25.331894  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:25.332048  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:25.352347  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:25.352419  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:25.352475  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 21:12:25.352526  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:25.419417  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1017 21:12:25.419530  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:25.419621  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:25.488014  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1017 21:12:25.488155  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:25.488271  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:25.488381  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1017 21:12:25.536473  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1017 21:12:25.536625  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 21:12:25.536763  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1017 21:12:25.536856  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1017 21:12:25.590673  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1017 21:12:25.590835  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1017 21:12:25.590955  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1017 21:12:25.591052  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1017 21:12:25.591175  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1017 21:12:25.591262  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1017 21:12:25.632477  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1017 21:12:25.632531  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1017 21:12:25.632594  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1017 21:12:25.632719  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1017 21:12:25.632795  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1017 21:12:25.632901  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 21:12:25.681649  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1017 21:12:25.681760  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1017 21:12:25.681871  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1017 21:12:25.681993  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1017 21:12:25.682083  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1017 21:12:25.682125  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1017 21:12:25.682192  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1017 21:12:25.682279  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1017 21:12:25.690104  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1017 21:12:25.690192  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1017 21:12:25.690271  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1017 21:12:25.690311  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	W1017 21:12:25.706108  806397 ssh_runner.go:129] session error, resetting client: ssh: rejected: connect failed (open failed)
	I1017 21:12:25.706192  806397 retry.go:31] will retry after 355.983196ms: ssh: rejected: connect failed (open failed)
	I1017 21:12:25.737408  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1017 21:12:25.737460  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1017 21:12:25.737546  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:25.737805  806397 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1017 21:12:25.737843  806397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1017 21:12:25.737890  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:25.790091  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:25.791870  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:12:25.839324  806397 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1017 21:12:25.839397  806397 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1017 21:12:25.839464  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:12:25.869595  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	W1017 21:12:26.192595  806397 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1017 21:12:26.192805  806397 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:26.555930  806397 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1017 21:12:26.556021  806397 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:26.556109  806397 ssh_runner.go:195] Run: which crictl
	I1017 21:12:26.556192  806397 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1017 21:12:26.556238  806397 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1017 21:12:26.556288  806397 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1017 21:12:28.702118  806397 ssh_runner.go:235] Completed: which crictl: (2.14596794s)
	I1017 21:12:28.702202  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:28.702303  806397 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (2.145986205s)
	I1017 21:12:28.702320  806397 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1017 21:12:28.702341  806397 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 21:12:28.702375  806397 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1017 21:12:28.738720  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:30.559969  806397 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.857570071s)
	I1017 21:12:30.560070  806397 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1017 21:12:30.560111  806397 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 21:12:30.560198  806397 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1017 21:12:30.560004  806397 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.821247429s)
	I1017 21:12:30.560312  806397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:12:30.591930  806397 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1017 21:12:30.592061  806397 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	
	
	==> CRI-O <==
	Oct 17 21:12:19 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:19.566369454Z" level=info msg="Created container a2d31822a9ce4a6170804202d80d777899a025b376100cf8ac30cdbce6f60a39: kube-system/coredns-5dd5756b68-vbl7d/coredns" id=23505b5c-9a6e-4567-8bd2-470ef1797456 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:12:19 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:19.571309781Z" level=info msg="Starting container: a2d31822a9ce4a6170804202d80d777899a025b376100cf8ac30cdbce6f60a39" id=e4e4fe27-0eea-41ae-9199-68424227b95f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:12:19 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:19.575700528Z" level=info msg="Started container" PID=1958 containerID=a2d31822a9ce4a6170804202d80d777899a025b376100cf8ac30cdbce6f60a39 description=kube-system/coredns-5dd5756b68-vbl7d/coredns id=e4e4fe27-0eea-41ae-9199-68424227b95f name=/runtime.v1.RuntimeService/StartContainer sandboxID=377e87d4b907ab28d84a5edcb5a84a30ed44ba74ed0cec7c7f31feef251d5e00
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.087508378Z" level=info msg="Running pod sandbox: default/busybox/POD" id=01c8e45b-48b5-4d15-9dd5-b542b90e5e82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.087580264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.092887119Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0663c80b8341ef8aaa6b1b6d5ea68a5dcb6b1af6cea779ebe0cbf5f189961e32 UID:67434d41-b1c0-448a-865e-0a81da0dde6b NetNS:/var/run/netns/536ae240-4a3a-4582-aa9b-13a0105615a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400251edb0}] Aliases:map[]}"
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.093057189Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.102841094Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:0663c80b8341ef8aaa6b1b6d5ea68a5dcb6b1af6cea779ebe0cbf5f189961e32 UID:67434d41-b1c0-448a-865e-0a81da0dde6b NetNS:/var/run/netns/536ae240-4a3a-4582-aa9b-13a0105615a4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400251edb0}] Aliases:map[]}"
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.103000128Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.10571274Z" level=info msg="Ran pod sandbox 0663c80b8341ef8aaa6b1b6d5ea68a5dcb6b1af6cea779ebe0cbf5f189961e32 with infra container: default/busybox/POD" id=01c8e45b-48b5-4d15-9dd5-b542b90e5e82 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.109979416Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=26529fd0-91d2-435f-afdb-0cbb4af711f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.110234156Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=26529fd0-91d2-435f-afdb-0cbb4af711f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.110285062Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=26529fd0-91d2-435f-afdb-0cbb4af711f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.11125465Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9eb6f6af-78b7-4d42-a6e3-4a16e9425f22 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:12:23 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:23.113737901Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.212237617Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=9eb6f6af-78b7-4d42-a6e3-4a16e9425f22 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.215371486Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b0007e7-86c8-4f32-ac89-ab04fddd17d5 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.218899452Z" level=info msg="Creating container: default/busybox/busybox" id=c89cf460-8bb9-47c4-a062-800af1423d1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.220112548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.231333166Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.232342239Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.266434573Z" level=info msg="Created container 1e9ece682a5e4fe2afd42a6edeab9b8f4776cc0fb559fabe8218b29aa27d3272: default/busybox/busybox" id=c89cf460-8bb9-47c4-a062-800af1423d1c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.270996146Z" level=info msg="Starting container: 1e9ece682a5e4fe2afd42a6edeab9b8f4776cc0fb559fabe8218b29aa27d3272" id=afb1f1fe-dab5-462b-b971-dff80307bd03 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:12:25 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:25.283198652Z" level=info msg="Started container" PID=2019 containerID=1e9ece682a5e4fe2afd42a6edeab9b8f4776cc0fb559fabe8218b29aa27d3272 description=default/busybox/busybox id=afb1f1fe-dab5-462b-b971-dff80307bd03 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0663c80b8341ef8aaa6b1b6d5ea68a5dcb6b1af6cea779ebe0cbf5f189961e32
	Oct 17 21:12:32 old-k8s-version-521710 crio[837]: time="2025-10-17T21:12:32.944080709Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	1e9ece682a5e4       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   9 seconds ago       Running             busybox                   0                   0663c80b8341e       busybox                                          default
	a2d31822a9ce4       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      15 seconds ago      Running             coredns                   0                   377e87d4b907a       coredns-5dd5756b68-vbl7d                         kube-system
	416590f474612       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      15 seconds ago      Running             storage-provisioner       0                   3a8a7121e7ef3       storage-provisioner                              kube-system
	c19e9fb2ec632       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    26 seconds ago      Running             kindnet-cni               0                   867fb5ce082e7       kindnet-w5t9r                                    kube-system
	555acde65648f       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      31 seconds ago      Running             kube-proxy                0                   c4af7aea54ffc       kube-proxy-dz7dm                                 kube-system
	4ff0e9c64a07e       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      55 seconds ago      Running             kube-controller-manager   0                   92e19a07a873b       kube-controller-manager-old-k8s-version-521710   kube-system
	02549a5d34ac9       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      55 seconds ago      Running             kube-apiserver            0                   29220156d23ec       kube-apiserver-old-k8s-version-521710            kube-system
	7223a709117ac       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      55 seconds ago      Running             kube-scheduler            0                   074441b557ef5       kube-scheduler-old-k8s-version-521710            kube-system
	9a7d5c7ad70ea       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      55 seconds ago      Running             etcd                      0                   da5f8c413b36f       etcd-old-k8s-version-521710                      kube-system
	
	
	==> coredns [a2d31822a9ce4a6170804202d80d777899a025b376100cf8ac30cdbce6f60a39] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39897 - 42092 "HINFO IN 1357201103640583668.5633647047653290995. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.02716332s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-521710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-521710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-521710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_11_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-521710
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:12:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:12:18 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:12:18 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:12:18 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:12:18 +0000   Fri, 17 Oct 2025 21:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-521710
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f23ef8d1-8109-4c2e-9a15-daa99b3bc5b9
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-5dd5756b68-vbl7d                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     32s
	  kube-system                 etcd-old-k8s-version-521710                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         46s
	  kube-system                 kindnet-w5t9r                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-old-k8s-version-521710             250m (12%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-controller-manager-old-k8s-version-521710    200m (10%)    0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 kube-proxy-dz7dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-old-k8s-version-521710             100m (5%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 31s                kube-proxy       
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s                kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s                kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s                kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                node-controller  Node old-k8s-version-521710 event: Registered Node old-k8s-version-521710 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-521710 status is now: NodeReady
	
	
	==> dmesg <==
	[ +27.124680] overlayfs: idmapped layers are currently not supported
	[  +8.199606] overlayfs: idmapped layers are currently not supported
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [9a7d5c7ad70eaa8b5cd39fb6fb2bdff0976308e678e48a3ab86583807ffcad4a] <==
	{"level":"info","ts":"2025-10-17T21:11:39.00738Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T21:11:39.007121Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:11:39.009002Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:11:39.007939Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T21:11:39.375647Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-17T21:11:39.37578Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-17T21:11:39.375828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-10-17T21:11:39.375878Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-10-17T21:11:39.375916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T21:11:39.375965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-10-17T21:11:39.375996Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T21:11:39.377565Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:11:39.378828Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-521710 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T21:11:39.378901Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:11:39.380198Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T21:11:39.380467Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:11:39.380585Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:11:39.380662Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:11:39.38154Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:11:39.386891Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T21:11:39.381813Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T21:11:39.387043Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-17T21:12:01.98358Z","caller":"traceutil/trace.go:171","msg":"trace[341666260] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"101.63487ms","start":"2025-10-17T21:12:01.881922Z","end":"2025-10-17T21:12:01.983557Z","steps":["trace[341666260] 'process raft request'  (duration: 20.226642ms)","trace[341666260] 'compare'  (duration: 29.941828ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-17T21:12:02.136524Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.221814ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/deployment-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-17T21:12:02.136664Z","caller":"traceutil/trace.go:171","msg":"trace[1540637474] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/deployment-controller; range_end:; response_count:1; response_revision:372; }","duration":"118.377024ms","start":"2025-10-17T21:12:02.018275Z","end":"2025-10-17T21:12:02.136652Z","steps":["trace[1540637474] 'agreement among raft nodes before linearized reading'  (duration: 118.176759ms)"],"step_count":1}
	
	
	==> kernel <==
	 21:12:34 up  3:55,  0 user,  load average: 2.71, 3.52, 3.04
	Linux old-k8s-version-521710 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c19e9fb2ec6326f32016675391d5a0faf47f18d28778adc6392d631deb20d26f] <==
	I1017 21:12:07.933655       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:12:07.934122       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:12:07.934276       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:12:07.934317       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:12:07.934360       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:12:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:12:08.221133       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:12:08.221168       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:12:08.221178       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:12:08.221686       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 21:12:08.421325       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:12:08.421414       1 metrics.go:72] Registering metrics
	I1017 21:12:08.421552       1 controller.go:711] "Syncing nftables rules"
	I1017 21:12:18.228932       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:12:18.228989       1 main.go:301] handling current node
	I1017 21:12:28.221349       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:12:28.221387       1 main.go:301] handling current node
	
	
	==> kube-apiserver [02549a5d34ac9e2b9c11fb3431835335870e651444e84d64df9755c68b4a27a5] <==
	I1017 21:11:45.281269       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 21:11:45.281277       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:11:45.345520       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 21:11:45.352482       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 21:11:45.364416       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 21:11:45.383857       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 21:11:45.383884       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 21:11:45.383985       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E1017 21:11:45.461971       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1017 21:11:45.478762       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:11:45.884961       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 21:11:45.913392       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 21:11:45.913424       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:11:46.576375       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:11:46.635410       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:11:46.721777       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 21:11:46.728950       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 21:11:46.730026       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 21:11:46.737792       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:11:47.293340       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 21:11:48.389848       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 21:11:48.405606       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 21:11:48.419036       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1017 21:12:01.564679       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1017 21:12:02.213741       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ff0e9c64a07e907e7647ec5cdadde22a7e64bc74dfa3acd2b71365568f4e345] <==
	I1017 21:12:01.291163       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 21:12:01.305449       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 21:12:01.696338       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-dz7dm"
	I1017 21:12:01.696494       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:12:01.733287       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w5t9r"
	I1017 21:12:01.733330       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:12:01.733342       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 21:12:02.250496       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1017 21:12:02.336779       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-jjz8b"
	I1017 21:12:02.373741       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-vbl7d"
	I1017 21:12:02.426751       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="176.967994ms"
	I1017 21:12:02.455736       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.903234ms"
	I1017 21:12:02.455853       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.795µs"
	I1017 21:12:02.455913       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.207µs"
	I1017 21:12:04.339692       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1017 21:12:04.376860       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-jjz8b"
	I1017 21:12:04.427547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.039019ms"
	I1017 21:12:04.457556       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="29.953077ms"
	I1017 21:12:04.457684       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.618µs"
	I1017 21:12:18.896369       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="281.563µs"
	I1017 21:12:18.947771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="72.599µs"
	I1017 21:12:19.879760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="190.108µs"
	I1017 21:12:20.826259       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.507561ms"
	I1017 21:12:20.826340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.561µs"
	I1017 21:12:21.084820       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [555acde65648f56c5111761df0900dabe3490774a1b6a4836d8e442eaa737139] <==
	I1017 21:12:03.304620       1 server_others.go:69] "Using iptables proxy"
	I1017 21:12:03.386549       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 21:12:03.487261       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:12:03.488984       1 server_others.go:152] "Using iptables Proxier"
	I1017 21:12:03.489016       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 21:12:03.489024       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 21:12:03.489058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 21:12:03.489260       1 server.go:846] "Version info" version="v1.28.0"
	I1017 21:12:03.489269       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:12:03.492051       1 config.go:188] "Starting service config controller"
	I1017 21:12:03.492087       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 21:12:03.492108       1 config.go:97] "Starting endpoint slice config controller"
	I1017 21:12:03.492118       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 21:12:03.492686       1 config.go:315] "Starting node config controller"
	I1017 21:12:03.492699       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 21:12:03.592778       1 shared_informer.go:318] Caches are synced for node config
	I1017 21:12:03.592812       1 shared_informer.go:318] Caches are synced for service config
	I1017 21:12:03.592839       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [7223a709117aca1ab99390ccfae766d52b183839c032879fe762dea7b5e441cd] <==
	W1017 21:11:45.423898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1017 21:11:45.423913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1017 21:11:45.423978       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1017 21:11:45.423994       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1017 21:11:45.424053       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1017 21:11:45.424069       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1017 21:11:45.424141       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1017 21:11:45.424156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1017 21:11:45.424210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1017 21:11:45.424225       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1017 21:11:45.424283       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 21:11:45.424298       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 21:11:45.424345       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1017 21:11:45.424360       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1017 21:11:45.432590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1017 21:11:45.432632       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1017 21:11:45.433396       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1017 21:11:45.433427       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1017 21:11:45.433493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1017 21:11:45.433504       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1017 21:11:46.249955       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1017 21:11:46.250077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1017 21:11:46.320496       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1017 21:11:46.320532       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I1017 21:11:46.836633       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.744822    1382 topology_manager.go:215] "Topology Admit Handler" podUID="c06470bc-984f-4133-9b6b-9a07628779d6" podNamespace="kube-system" podName="kube-proxy-dz7dm"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.747574    1382 topology_manager.go:215] "Topology Admit Handler" podUID="1a59d731-1286-45cc-ba5b-6c62ec8d01bc" podNamespace="kube-system" podName="kindnet-w5t9r"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: W1017 21:12:01.791908    1382 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-521710" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-521710' and this object
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: E1017 21:12:01.791956    1382 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:old-k8s-version-521710" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'old-k8s-version-521710' and this object
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.834927    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1a59d731-1286-45cc-ba5b-6c62ec8d01bc-cni-cfg\") pod \"kindnet-w5t9r\" (UID: \"1a59d731-1286-45cc-ba5b-6c62ec8d01bc\") " pod="kube-system/kindnet-w5t9r"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.835657    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a59d731-1286-45cc-ba5b-6c62ec8d01bc-xtables-lock\") pod \"kindnet-w5t9r\" (UID: \"1a59d731-1286-45cc-ba5b-6c62ec8d01bc\") " pod="kube-system/kindnet-w5t9r"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.835844    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a59d731-1286-45cc-ba5b-6c62ec8d01bc-lib-modules\") pod \"kindnet-w5t9r\" (UID: \"1a59d731-1286-45cc-ba5b-6c62ec8d01bc\") " pod="kube-system/kindnet-w5t9r"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.835904    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c06470bc-984f-4133-9b6b-9a07628779d6-kube-proxy\") pod \"kube-proxy-dz7dm\" (UID: \"c06470bc-984f-4133-9b6b-9a07628779d6\") " pod="kube-system/kube-proxy-dz7dm"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.835942    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c06470bc-984f-4133-9b6b-9a07628779d6-xtables-lock\") pod \"kube-proxy-dz7dm\" (UID: \"c06470bc-984f-4133-9b6b-9a07628779d6\") " pod="kube-system/kube-proxy-dz7dm"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.836018    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbp2c\" (UniqueName: \"kubernetes.io/projected/c06470bc-984f-4133-9b6b-9a07628779d6-kube-api-access-jbp2c\") pod \"kube-proxy-dz7dm\" (UID: \"c06470bc-984f-4133-9b6b-9a07628779d6\") " pod="kube-system/kube-proxy-dz7dm"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.836091    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c06470bc-984f-4133-9b6b-9a07628779d6-lib-modules\") pod \"kube-proxy-dz7dm\" (UID: \"c06470bc-984f-4133-9b6b-9a07628779d6\") " pod="kube-system/kube-proxy-dz7dm"
	Oct 17 21:12:01 old-k8s-version-521710 kubelet[1382]: I1017 21:12:01.836149    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjt8n\" (UniqueName: \"kubernetes.io/projected/1a59d731-1286-45cc-ba5b-6c62ec8d01bc-kube-api-access-cjt8n\") pod \"kindnet-w5t9r\" (UID: \"1a59d731-1286-45cc-ba5b-6c62ec8d01bc\") " pod="kube-system/kindnet-w5t9r"
	Oct 17 21:12:03 old-k8s-version-521710 kubelet[1382]: I1017 21:12:03.702470    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-dz7dm" podStartSLOduration=2.70233778 podCreationTimestamp="2025-10-17 21:12:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:12:03.702158052 +0000 UTC m=+15.357359590" watchObservedRunningTime="2025-10-17 21:12:03.70233778 +0000 UTC m=+15.357539326"
	Oct 17 21:12:08 old-k8s-version-521710 kubelet[1382]: I1017 21:12:08.726343    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-w5t9r" podStartSLOduration=3.006317469 podCreationTimestamp="2025-10-17 21:12:01 +0000 UTC" firstStartedPulling="2025-10-17 21:12:03.056106549 +0000 UTC m=+14.711308087" lastFinishedPulling="2025-10-17 21:12:07.776086667 +0000 UTC m=+19.431288197" observedRunningTime="2025-10-17 21:12:08.725926038 +0000 UTC m=+20.381127568" watchObservedRunningTime="2025-10-17 21:12:08.726297579 +0000 UTC m=+20.381499117"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.764831    1382 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.884696    1382 topology_manager.go:215] "Topology Admit Handler" podUID="955afcc9-f2a7-4a58-aef7-bf782ee6e489" podNamespace="kube-system" podName="coredns-5dd5756b68-vbl7d"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.897984    1382 topology_manager.go:215] "Topology Admit Handler" podUID="66dcd538-5f45-4c68-99af-7376cbcaa0f4" podNamespace="kube-system" podName="storage-provisioner"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.912418    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/66dcd538-5f45-4c68-99af-7376cbcaa0f4-tmp\") pod \"storage-provisioner\" (UID: \"66dcd538-5f45-4c68-99af-7376cbcaa0f4\") " pod="kube-system/storage-provisioner"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.912492    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/955afcc9-f2a7-4a58-aef7-bf782ee6e489-config-volume\") pod \"coredns-5dd5756b68-vbl7d\" (UID: \"955afcc9-f2a7-4a58-aef7-bf782ee6e489\") " pod="kube-system/coredns-5dd5756b68-vbl7d"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.912529    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zjtt\" (UniqueName: \"kubernetes.io/projected/955afcc9-f2a7-4a58-aef7-bf782ee6e489-kube-api-access-2zjtt\") pod \"coredns-5dd5756b68-vbl7d\" (UID: \"955afcc9-f2a7-4a58-aef7-bf782ee6e489\") " pod="kube-system/coredns-5dd5756b68-vbl7d"
	Oct 17 21:12:18 old-k8s-version-521710 kubelet[1382]: I1017 21:12:18.913715    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x295l\" (UniqueName: \"kubernetes.io/projected/66dcd538-5f45-4c68-99af-7376cbcaa0f4-kube-api-access-x295l\") pod \"storage-provisioner\" (UID: \"66dcd538-5f45-4c68-99af-7376cbcaa0f4\") " pod="kube-system/storage-provisioner"
	Oct 17 21:12:19 old-k8s-version-521710 kubelet[1382]: I1017 21:12:19.901232    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vbl7d" podStartSLOduration=17.901105981 podCreationTimestamp="2025-10-17 21:12:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:12:19.876013845 +0000 UTC m=+31.531215399" watchObservedRunningTime="2025-10-17 21:12:19.901105981 +0000 UTC m=+31.556307519"
	Oct 17 21:12:19 old-k8s-version-521710 kubelet[1382]: I1017 21:12:19.902644    1382 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.902588767 podCreationTimestamp="2025-10-17 21:12:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:12:19.826329271 +0000 UTC m=+31.481530801" watchObservedRunningTime="2025-10-17 21:12:19.902588767 +0000 UTC m=+31.557790313"
	Oct 17 21:12:22 old-k8s-version-521710 kubelet[1382]: I1017 21:12:22.785623    1382 topology_manager.go:215] "Topology Admit Handler" podUID="67434d41-b1c0-448a-865e-0a81da0dde6b" podNamespace="default" podName="busybox"
	Oct 17 21:12:22 old-k8s-version-521710 kubelet[1382]: I1017 21:12:22.893291    1382 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzzfw\" (UniqueName: \"kubernetes.io/projected/67434d41-b1c0-448a-865e-0a81da0dde6b-kube-api-access-zzzfw\") pod \"busybox\" (UID: \"67434d41-b1c0-448a-865e-0a81da0dde6b\") " pod="default/busybox"
	
	
	==> storage-provisioner [416590f4746129fa6de6e6cdcf9ee21f7b99ed2cbd34258b3b8da961c0c0694f] <==
	I1017 21:12:19.489137       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:12:19.562469       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:12:19.562518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 21:12:19.590073       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:12:19.634087       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_e7d1f0b8-3122-4db1-888e-29b138a088df!
	I1017 21:12:19.702379       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2adb1101-5e3e-4bb4-b42e-5187960e23fd", APIVersion:"v1", ResourceVersion:"449", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-521710_e7d1f0b8-3122-4db1-888e-29b138a088df became leader
	I1017 21:12:19.851343       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_e7d1f0b8-3122-4db1-888e-29b138a088df!
	

                                                
                                                
-- /stdout --
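The storage-provisioner log above ends with the pod acquiring leader election on the kube-system/k8s.io-minikube-hostpath Endpoints lock and then starting the hostpath provisioner controller. A minimal sketch of how the current lock holder could be inspected from outside the pod, assuming the standard client-go leader-election annotation is used on that Endpoints object (illustrative only, not part of the test harness):

    # read the leader-election record stored on the Endpoints lock (assumed annotation key)
    kubectl --context old-k8s-version-521710 -n kube-system \
      get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'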
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-521710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.99s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (294.754875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:13:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
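The MK_ADDON_ENABLE_PAUSED error above comes from minikube's paused-container check, which the stderr shows shelling out to `sudo runc list -f json` on the node; with the crio runtime the /run/runc state directory appears to be missing, so the check fails even though nothing is paused. A minimal diagnostic sketch, assuming shell access to the profile's node (the exact paths are assumptions, not confirmed from this run):

    # on the node (e.g. via: minikube ssh -p no-preload-820018)
    sudo runc list -f json || true   # reproduce the failing paused-container check
    sudo ls -ld /run/runc            # the state directory the error says does not exist
    sudo crictl ps                   # what the CRI runtime itself reports as running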
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-820018 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-820018 describe deploy/metrics-server -n kube-system: exit status 1 (88.953104ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-820018 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
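The assertion above expects the metrics-server deployment's container image to carry the registry override (fake.domain) passed via --images/--registries; since the deployment was never created, the inspected deployment info is empty. A small sketch of the equivalent manual check, assuming the deployment exists in kube-system (illustrative, not the harness's own code):

    # print the image the deployment actually runs; expected to contain
    # fake.domain/registry.k8s.io/echoserver:1.4
    kubectl --context no-preload-820018 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'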
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-820018
helpers_test.go:243: (dbg) docker inspect no-preload-820018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	        "Created": "2025-10-17T21:12:18.108117414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 806701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:12:18.200104167Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hostname",
	        "HostsPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hosts",
	        "LogPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589-json.log",
	        "Name": "/no-preload-820018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-820018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-820018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	                "LowerDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-820018",
	                "Source": "/var/lib/docker/volumes/no-preload-820018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-820018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-820018",
	                "name.minikube.sigs.k8s.io": "no-preload-820018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b819287dac2a816333fe8d928d6c9a6a0fb15951f7df41659a91f21e8825c953",
	            "SandboxKey": "/var/run/docker/netns/b819287dac2a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33827"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33829"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33833"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33831"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33832"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-820018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:0c:20:cc:41:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5060c6ac5e7d3e19dab985b5302ecd4b006296949593ffc066761654983bbcd9",
	                    "EndpointID": "dfd9df13dffe77e83d9dba6dff758ff39d600d72b073d67d90eb33c09acad63e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-820018",
	                        "9842fccb0456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-820018 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-820018 logs -n 25: (1.221642389s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-667721 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat docker --no-pager                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/docker/daemon.json                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo docker system info                                                                                                                                                                                                      │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:12:49
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:12:49.760056  809815 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:12:49.760212  809815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:12:49.760219  809815 out.go:374] Setting ErrFile to fd 2...
	I1017 21:12:49.760224  809815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:12:49.760474  809815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:12:49.760880  809815 out.go:368] Setting JSON to false
	I1017 21:12:49.761797  809815 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14116,"bootTime":1760721454,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:12:49.761854  809815 start.go:141] virtualization:  
	I1017 21:12:49.765511  809815 out.go:179] * [old-k8s-version-521710] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:12:49.769603  809815 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:12:49.769642  809815 notify.go:220] Checking for updates...
	I1017 21:12:49.775715  809815 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:12:49.778797  809815 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:12:49.781647  809815 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:12:49.784657  809815 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:12:49.787526  809815 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:12:49.790792  809815 config.go:182] Loaded profile config "old-k8s-version-521710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 21:12:49.794504  809815 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1017 21:12:49.797337  809815 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:12:49.848991  809815 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:12:49.849189  809815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:12:49.952081  809815 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-17 21:12:49.937960602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:12:49.952194  809815 docker.go:318] overlay module found
	I1017 21:12:49.955299  809815 out.go:179] * Using the docker driver based on existing profile
	I1017 21:12:49.958163  809815 start.go:305] selected driver: docker
	I1017 21:12:49.958184  809815 start.go:925] validating driver "docker" against &{Name:old-k8s-version-521710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-521710 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mo
untType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:12:49.958273  809815 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:12:49.959085  809815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:12:50.067629  809815 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:60 SystemTime:2025-10-17 21:12:50.057662847 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:12:50.067970  809815 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:12:50.068000  809815 cni.go:84] Creating CNI manager for ""
	I1017 21:12:50.068060  809815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:12:50.068095  809815 start.go:349] cluster config:
	{Name:old-k8s-version-521710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-521710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:12:50.071283  809815 out.go:179] * Starting "old-k8s-version-521710" primary control-plane node in "old-k8s-version-521710" cluster
	I1017 21:12:50.074192  809815 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:12:50.077177  809815 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:12:50.080150  809815 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 21:12:50.080219  809815 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 21:12:50.080238  809815 cache.go:58] Caching tarball of preloaded images
	I1017 21:12:50.080355  809815 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:12:50.080366  809815 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 21:12:50.080502  809815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/config.json ...
	I1017 21:12:50.080746  809815 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:12:50.108993  809815 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:12:50.109016  809815 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:12:50.109030  809815 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:12:50.109053  809815 start.go:360] acquireMachinesLock for old-k8s-version-521710: {Name:mk97d5029aecc5d88e89c1407e66ef4740184152 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:12:50.109107  809815 start.go:364] duration metric: took 36.374µs to acquireMachinesLock for "old-k8s-version-521710"
	I1017 21:12:50.109127  809815 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:12:50.109133  809815 fix.go:54] fixHost starting: 
	I1017 21:12:50.109401  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:12:50.141086  809815 fix.go:112] recreateIfNeeded on old-k8s-version-521710: state=Stopped err=<nil>
	W1017 21:12:50.141121  809815 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 21:12:47.309992  806397 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:12:48.893442  806397 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:12:49.071974  806397 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:12:49.072121  806397 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 21:12:49.526844  806397 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:12:50.247356  806397 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:12:50.807461  806397 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:12:51.048405  806397 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:12:51.412866  806397 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:12:51.419321  806397 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:12:51.419413  806397 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:12:51.423378  806397 out.go:252]   - Booting up control plane ...
	I1017 21:12:51.423496  806397 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:12:51.423585  806397 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:12:51.423661  806397 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:12:51.438823  806397 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:12:51.438947  806397 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:12:51.446779  806397 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:12:51.447192  806397 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:12:51.447443  806397 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:12:51.583030  806397 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:12:51.583213  806397 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:12:50.144449  809815 out.go:252] * Restarting existing docker container for "old-k8s-version-521710" ...
	I1017 21:12:50.144581  809815 cli_runner.go:164] Run: docker start old-k8s-version-521710
	I1017 21:12:50.430758  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:12:50.468730  809815 kic.go:430] container "old-k8s-version-521710" state is running.
	I1017 21:12:50.472265  809815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-521710
	I1017 21:12:50.511741  809815 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/config.json ...
	I1017 21:12:50.511977  809815 machine.go:93] provisionDockerMachine start ...
	I1017 21:12:50.512039  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:50.541693  809815 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:50.542017  809815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1017 21:12:50.542026  809815 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:12:50.542781  809815 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:12:53.719196  809815 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-521710
	
	I1017 21:12:53.719233  809815 ubuntu.go:182] provisioning hostname "old-k8s-version-521710"
	I1017 21:12:53.719295  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:53.751229  809815 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:53.751549  809815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1017 21:12:53.751567  809815 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-521710 && echo "old-k8s-version-521710" | sudo tee /etc/hostname
	I1017 21:12:53.945627  809815 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-521710
	
	I1017 21:12:53.945708  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:53.991380  809815 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:53.991695  809815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1017 21:12:53.991712  809815 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-521710' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-521710/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-521710' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:12:54.171452  809815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:12:54.171532  809815 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:12:54.171571  809815 ubuntu.go:190] setting up certificates
	I1017 21:12:54.171609  809815 provision.go:84] configureAuth start
	I1017 21:12:54.171743  809815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-521710
	I1017 21:12:54.203762  809815 provision.go:143] copyHostCerts
	I1017 21:12:54.203830  809815 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:12:54.203848  809815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:12:54.203925  809815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:12:54.204024  809815 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:12:54.204029  809815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:12:54.204055  809815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:12:54.204112  809815 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:12:54.204117  809815 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:12:54.204140  809815 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:12:54.204189  809815 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-521710 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-521710]
	I1017 21:12:52.583284  806397 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001550808s
	I1017 21:12:52.587848  806397 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:12:52.588175  806397 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1017 21:12:52.589937  806397 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:12:52.590045  806397 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:12:55.480645  809815 provision.go:177] copyRemoteCerts
	I1017 21:12:55.480745  809815 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:12:55.480810  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:55.498769  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:12:55.611353  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:12:55.637747  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1017 21:12:55.669124  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:12:55.691485  809815 provision.go:87] duration metric: took 1.519834696s to configureAuth
	I1017 21:12:55.691525  809815 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:12:55.691768  809815 config.go:182] Loaded profile config "old-k8s-version-521710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 21:12:55.691927  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:55.722501  809815 main.go:141] libmachine: Using SSH client type: native
	I1017 21:12:55.722833  809815 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33834 <nil> <nil>}
	I1017 21:12:55.722853  809815 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:12:56.118558  809815 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:12:56.118586  809815 machine.go:96] duration metric: took 5.606600031s to provisionDockerMachine
	I1017 21:12:56.118599  809815 start.go:293] postStartSetup for "old-k8s-version-521710" (driver="docker")
	I1017 21:12:56.118610  809815 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:12:56.118684  809815 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:12:56.118749  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:56.143143  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:12:56.259577  809815 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:12:56.267435  809815 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:12:56.267463  809815 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:12:56.267473  809815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:12:56.267532  809815 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:12:56.267615  809815 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:12:56.267716  809815 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:12:56.281195  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:12:56.309681  809815 start.go:296] duration metric: took 191.067602ms for postStartSetup
	I1017 21:12:56.309815  809815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:12:56.309880  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:56.339292  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:12:56.447949  809815 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:12:56.456230  809815 fix.go:56] duration metric: took 6.347089122s for fixHost
	I1017 21:12:56.456258  809815 start.go:83] releasing machines lock for "old-k8s-version-521710", held for 6.347142564s
	I1017 21:12:56.456333  809815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-521710
	I1017 21:12:56.502288  809815 ssh_runner.go:195] Run: cat /version.json
	I1017 21:12:56.502345  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:56.502615  809815 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:12:56.502667  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:12:56.537961  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:12:56.538915  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:12:56.659054  809815 ssh_runner.go:195] Run: systemctl --version
	I1017 21:12:56.779006  809815 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:12:56.859829  809815 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:12:56.864734  809815 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:12:56.864849  809815 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:12:56.881481  809815 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:12:56.881508  809815 start.go:495] detecting cgroup driver to use...
	I1017 21:12:56.881564  809815 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:12:56.881647  809815 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:12:56.907889  809815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:12:56.927763  809815 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:12:56.927851  809815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:12:56.953386  809815 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:12:56.981295  809815 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:12:57.214339  809815 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:12:57.407303  809815 docker.go:234] disabling docker service ...
	I1017 21:12:57.407419  809815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:12:57.428359  809815 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:12:57.449986  809815 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:12:57.662846  809815 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:12:57.885873  809815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:12:57.901545  809815 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:12:57.916793  809815 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1017 21:12:57.916868  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:57.928606  809815 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:12:57.928685  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:57.960570  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:57.973196  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:57.997920  809815 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:12:58.013895  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:58.028686  809815 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:58.044678  809815 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:12:58.062157  809815 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:12:58.077331  809815 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:12:58.089557  809815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:12:58.259008  809815 ssh_runner.go:195] Run: sudo systemctl restart crio
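For reference, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following drop-in values before crio is restarted (reconstructed from the commands shown, not read back from the host; section headers omitted):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]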
	I1017 21:12:58.453671  809815 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:12:58.453777  809815 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:12:58.458144  809815 start.go:563] Will wait 60s for crictl version
	I1017 21:12:58.458222  809815 ssh_runner.go:195] Run: which crictl
	I1017 21:12:58.462164  809815 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:12:58.520117  809815 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:12:58.520233  809815 ssh_runner.go:195] Run: crio --version
	I1017 21:12:58.565790  809815 ssh_runner.go:195] Run: crio --version
	I1017 21:12:58.626854  809815 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1017 21:12:58.629834  809815 cli_runner.go:164] Run: docker network inspect old-k8s-version-521710 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:12:58.644761  809815 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:12:58.649056  809815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:12:58.659648  809815 kubeadm.go:883] updating cluster {Name:old-k8s-version-521710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-521710 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountU
ID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:12:58.659759  809815 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 21:12:58.659817  809815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:12:58.721284  809815 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:12:58.721304  809815 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:12:58.721357  809815 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:12:58.779525  809815 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:12:58.779613  809815 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:12:58.779636  809815 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1017 21:12:58.779781  809815 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-521710 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-521710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:12:58.779911  809815 ssh_runner.go:195] Run: crio config
	I1017 21:12:58.855558  809815 cni.go:84] Creating CNI manager for ""
	I1017 21:12:58.855634  809815 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:12:58.855674  809815 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:12:58.855728  809815 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-521710 NodeName:old-k8s-version-521710 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:12:58.855920  809815 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-521710"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:12:58.856040  809815 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1017 21:12:58.868301  809815 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:12:58.868445  809815 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:12:58.878935  809815 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1017 21:12:58.909273  809815 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:12:58.933378  809815 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1017 21:12:58.952853  809815 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:12:58.959751  809815 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:12:58.977760  809815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:12:59.148086  809815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:12:59.179738  809815 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710 for IP: 192.168.76.2
	I1017 21:12:59.179808  809815 certs.go:195] generating shared ca certs ...
	I1017 21:12:59.179840  809815 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:12:59.180050  809815 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:12:59.180140  809815 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:12:59.180167  809815 certs.go:257] generating profile certs ...
	I1017 21:12:59.180296  809815 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/client.key
	I1017 21:12:59.180409  809815 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/apiserver.key.f456b306
	I1017 21:12:59.180490  809815 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/proxy-client.key
	I1017 21:12:59.180653  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:12:59.180718  809815 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:12:59.180743  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:12:59.180803  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:12:59.180863  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:12:59.180923  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:12:59.181009  809815 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:12:59.181799  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:12:59.225135  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:12:59.266334  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:12:59.317034  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:12:59.370026  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1017 21:12:59.407746  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:12:59.440134  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:12:59.482905  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:12:59.517343  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:12:59.589962  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:12:59.621139  809815 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:12:59.646168  809815 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:12:59.672917  809815 ssh_runner.go:195] Run: openssl version
	I1017 21:12:59.684224  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:12:59.697605  809815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:12:59.702125  809815 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:12:59.702265  809815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:12:59.765366  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:12:59.774275  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:12:59.783340  809815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:12:59.787680  809815 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:12:59.787743  809815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:12:59.830255  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:12:59.839272  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:12:59.848557  809815 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:12:59.853052  809815 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:12:59.853171  809815 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:12:59.895253  809815 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
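The three ln -fs steps above install OpenSSL "hashed" symlinks (51391683.0, 3ec20f2e.0, b5213941.0) so the system trust store can look each PEM up by subject hash; the link name is the output of openssl x509 -hash plus a .0 suffix. A minimal shell sketch of the same operation for the minikubeCA certificate checked above:

    # compute the subject hash and expose the cert under /etc/ssl/certs/<hash>.0
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"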
	I1017 21:12:59.904181  809815 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:12:59.908649  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:12:59.950889  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:13:00.025335  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:13:00.116776  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:13:00.245982  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:13:00.346013  809815 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
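The openssl -checkend 86400 probes above are expiry checks: the command exits 0 if the certificate is still valid 86,400 seconds (24 hours) from now and non-zero if it would expire within that window, which is what tells minikube whether a cert needs regenerating. A standalone equivalent, using one of the paths checked above:

    # exit 0: valid for at least another 24h; exit 1: expires within 24h
    openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "certificate ok" || echo "certificate expires within 24h"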
	I1017 21:13:00.543218  809815 kubeadm.go:400] StartCluster: {Name:old-k8s-version-521710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-521710 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:00.543371  809815 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:13:00.543534  809815 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:13:00.637424  809815 cri.go:89] found id: "33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45"
	I1017 21:13:00.637497  809815 cri.go:89] found id: "d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038"
	I1017 21:13:00.637516  809815 cri.go:89] found id: "458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a"
	I1017 21:13:00.637534  809815 cri.go:89] found id: "14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d"
	I1017 21:13:00.637553  809815 cri.go:89] found id: ""
	I1017 21:13:00.637661  809815 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:13:00.660865  809815 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:13:00Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:13:00.661030  809815 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:13:00.674929  809815 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:13:00.675008  809815 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:13:00.675138  809815 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:13:00.685944  809815 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:13:00.686342  809815 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-521710" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:00.686444  809815 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-521710" cluster setting kubeconfig missing "old-k8s-version-521710" context setting]
	I1017 21:13:00.686733  809815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:00.689254  809815 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:13:00.728464  809815 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 21:13:00.728549  809815 kubeadm.go:601] duration metric: took 53.519565ms to restartPrimaryControlPlane
	I1017 21:13:00.728573  809815 kubeadm.go:402] duration metric: took 185.375126ms to StartCluster
	I1017 21:13:00.728621  809815 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:00.728723  809815 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:00.729440  809815 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:00.729730  809815 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:13:00.730117  809815 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:13:00.730222  809815 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-521710"
	I1017 21:13:00.730255  809815 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-521710"
	W1017 21:13:00.730292  809815 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:13:00.730335  809815 host.go:66] Checking if "old-k8s-version-521710" exists ...
	I1017 21:13:00.731059  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:13:00.731457  809815 config.go:182] Loaded profile config "old-k8s-version-521710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 21:13:00.731573  809815 addons.go:69] Setting dashboard=true in profile "old-k8s-version-521710"
	I1017 21:13:00.731604  809815 addons.go:238] Setting addon dashboard=true in "old-k8s-version-521710"
	W1017 21:13:00.731640  809815 addons.go:247] addon dashboard should already be in state true
	I1017 21:13:00.731683  809815 host.go:66] Checking if "old-k8s-version-521710" exists ...
	I1017 21:13:00.732162  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:13:00.737322  809815 out.go:179] * Verifying Kubernetes components...
	I1017 21:13:00.737513  809815 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-521710"
	I1017 21:13:00.737825  809815 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-521710"
	I1017 21:13:00.738158  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:13:00.741507  809815 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:00.783177  809815 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:13:00.783232  809815 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:13:00.787697  809815 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:00.787719  809815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:13:00.787785  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:13:00.791249  809815 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 21:12:57.598179  806397 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 5.008226186s
	I1017 21:13:01.106714  806397 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 8.517166952s
	I1017 21:13:03.092190  806397 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 10.502452939s
	I1017 21:13:03.123474  806397 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:13:03.153837  806397 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:13:03.177073  806397 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:13:03.177506  806397 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-820018 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:13:03.194755  806397 kubeadm.go:318] [bootstrap-token] Using token: 3decqn.jlnjgmq9uek29kmb
	I1017 21:13:03.197590  806397 out.go:252]   - Configuring RBAC rules ...
	I1017 21:13:03.197716  806397 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:13:03.208327  806397 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:13:03.218491  806397 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:13:03.222463  806397 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:13:03.228768  806397 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:13:03.234135  806397 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:13:03.500199  806397 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:13:03.948760  806397 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:13:04.506870  806397 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:13:04.508457  806397 kubeadm.go:318] 
	I1017 21:13:04.508539  806397 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:13:04.508546  806397 kubeadm.go:318] 
	I1017 21:13:04.508626  806397 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:13:04.508630  806397 kubeadm.go:318] 
	I1017 21:13:04.508656  806397 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:13:04.509110  806397 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:13:04.509180  806397 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:13:04.509185  806397 kubeadm.go:318] 
	I1017 21:13:04.509241  806397 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:13:04.509246  806397 kubeadm.go:318] 
	I1017 21:13:04.509296  806397 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:13:04.509300  806397 kubeadm.go:318] 
	I1017 21:13:04.509354  806397 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:13:04.509433  806397 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:13:04.509504  806397 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:13:04.509509  806397 kubeadm.go:318] 
	I1017 21:13:04.509827  806397 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:13:04.509914  806397 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:13:04.509919  806397 kubeadm.go:318] 
	I1017 21:13:04.510220  806397 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3decqn.jlnjgmq9uek29kmb \
	I1017 21:13:04.510339  806397 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:13:04.510574  806397 kubeadm.go:318] 	--control-plane 
	I1017 21:13:04.510583  806397 kubeadm.go:318] 
	I1017 21:13:04.510896  806397 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:13:04.510907  806397 kubeadm.go:318] 
	I1017 21:13:04.511193  806397 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3decqn.jlnjgmq9uek29kmb \
	I1017 21:13:04.511508  806397 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:13:04.518289  806397 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:13:04.518653  806397 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:13:04.518832  806397 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 21:13:04.518870  806397 cni.go:84] Creating CNI manager for ""
	I1017 21:13:04.518909  806397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:13:04.522290  806397 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:13:00.796601  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:13:00.796624  809815 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:13:00.796685  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:13:00.796915  809815 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-521710"
	W1017 21:13:00.796939  809815 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:13:00.796964  809815 host.go:66] Checking if "old-k8s-version-521710" exists ...
	I1017 21:13:00.797365  809815 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:13:00.827024  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:13:00.861821  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:13:00.862582  809815 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:00.862598  809815 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:13:00.862676  809815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:13:00.888366  809815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:13:01.181087  809815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:01.281697  809815 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:01.296111  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:13:01.296176  809815 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:13:01.343547  809815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:01.485278  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:13:01.485366  809815 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:13:01.587917  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:13:01.587994  809815 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:13:01.667294  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:13:01.667367  809815 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:13:01.736409  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:13:01.736492  809815 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:13:01.782574  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:13:01.782655  809815 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:13:01.812125  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:13:01.812201  809815 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:13:01.843942  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:13:01.844026  809815 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:13:01.875735  809815 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:13:01.875812  809815 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:13:01.910262  809815 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
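The apply above bundles all ten dashboard manifests into one kubectl invocation. A hedged follow-up check that the addon actually came up, assuming the manifests create the usual kubernetes-dashboard namespace (the namespace name itself is not shown in this log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl \
      -n kubernetes-dashboard get deployments,pods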
	I1017 21:13:04.525146  806397 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:13:04.531736  806397 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:13:04.531754  806397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:13:04.586438  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
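The manifest applied here is the kindnet CNI bundle recommended above for the docker driver with the crio runtime. A sketch of how one could confirm its DaemonSet pods are running, assuming kindnet's usual app=kindnet pod label (an assumption, not shown in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get pods -l app=kindnet -o wide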
	I1017 21:13:05.130586  806397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:13:05.130721  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:05.130795  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-820018 minikube.k8s.io/updated_at=2025_10_17T21_13_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=no-preload-820018 minikube.k8s.io/primary=true
	I1017 21:13:05.532800  806397 ops.go:34] apiserver oom_adj: -16
	I1017 21:13:05.532911  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:06.033419  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:06.533537  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:07.033981  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:07.532999  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:08.033496  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:08.533481  806397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:13:08.725653  806397 kubeadm.go:1113] duration metric: took 3.594985704s to wait for elevateKubeSystemPrivileges
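The repeated "kubectl get sa default" runs above (one every ~0.5s from 21:13:05 to 21:13:08) are minikube waiting for the default ServiceAccount to exist before it elevates kube-system privileges. A shell equivalent of that retry loop:

    # poll until the default ServiceAccount appears, as the logged loop does
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done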
	I1017 21:13:08.725694  806397 kubeadm.go:402] duration metric: took 24.539743192s to StartCluster
	I1017 21:13:08.725714  806397 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:08.725779  806397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:08.726718  806397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:08.726932  806397 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:13:08.727063  806397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:13:08.727349  806397 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:08.727381  806397 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:13:08.727440  806397 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820018"
	I1017 21:13:08.727453  806397 addons.go:238] Setting addon storage-provisioner=true in "no-preload-820018"
	I1017 21:13:08.727473  806397 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:08.727992  806397 addons.go:69] Setting default-storageclass=true in profile "no-preload-820018"
	I1017 21:13:08.728075  806397 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820018"
	I1017 21:13:08.728387  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:08.728770  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:08.730348  806397 out.go:179] * Verifying Kubernetes components...
	I1017 21:13:08.733492  806397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:08.767926  806397 addons.go:238] Setting addon default-storageclass=true in "no-preload-820018"
	I1017 21:13:08.767964  806397 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:08.769999  806397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:13:09.116292  809815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.935170636s)
	I1017 21:13:09.116360  809815 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (7.834589631s)
	I1017 21:13:09.116397  809815 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-521710" to be "Ready" ...
	I1017 21:13:09.116721  809815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.773100266s)
	I1017 21:13:09.196346  809815 node_ready.go:49] node "old-k8s-version-521710" is "Ready"
	I1017 21:13:09.196377  809815 node_ready.go:38] duration metric: took 79.952807ms for node "old-k8s-version-521710" to be "Ready" ...
	I1017 21:13:09.196390  809815 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:13:09.196459  809815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:13:10.255430  809815 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (8.345063319s)
	I1017 21:13:10.255593  809815 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.05911683s)
	I1017 21:13:10.255616  809815 api_server.go:72] duration metric: took 9.525826585s to wait for apiserver process to appear ...
	I1017 21:13:10.255630  809815 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:13:10.255652  809815 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:13:10.258540  809815 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-521710 addons enable metrics-server
	
	I1017 21:13:10.261738  809815 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1017 21:13:08.770358  806397 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:08.772930  806397 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:08.772951  806397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:13:08.773025  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:08.805667  806397 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:08.805694  806397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:13:08.805757  806397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:08.821278  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:08.846376  806397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33827 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:09.387908  806397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:09.390733  806397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:13:09.471538  806397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:09.495704  806397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:09.530280  806397 node_ready.go:35] waiting up to 6m0s for node "no-preload-820018" to be "Ready" ...
	I1017 21:13:10.565693  806397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.09411504s)
	I1017 21:13:10.565976  806397 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.1752084s)
	I1017 21:13:10.566000  806397 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 21:13:10.961880  806397 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.466131328s)
	I1017 21:13:10.965090  806397 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 21:13:10.968113  806397 addons.go:514] duration metric: took 2.240719123s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 21:13:11.069796  806397 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-820018" context rescaled to 1 replicas
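The rescale logged just above trims CoreDNS to a single replica for this one-node cluster; minikube presumably performs it through its Kubernetes API client, but it amounts to the same thing as:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1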
	W1017 21:13:11.534940  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	I1017 21:13:10.264581  809815 addons.go:514] duration metric: took 9.534463553s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1017 21:13:10.315168  809815 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:13:10.319858  809815 api_server.go:141] control plane version: v1.28.0
	I1017 21:13:10.319892  809815 api_server.go:131] duration metric: took 64.250773ms to wait for apiserver health ...
	I1017 21:13:10.319901  809815 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:13:10.329029  809815 system_pods.go:59] 8 kube-system pods found
	I1017 21:13:10.329081  809815 system_pods.go:61] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:13:10.329093  809815 system_pods.go:61] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:13:10.329100  809815 system_pods.go:61] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:13:10.329113  809815 system_pods.go:61] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:13:10.329120  809815 system_pods.go:61] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:13:10.329130  809815 system_pods.go:61] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:13:10.329138  809815 system_pods.go:61] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:13:10.329159  809815 system_pods.go:61] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Running
	I1017 21:13:10.329166  809815 system_pods.go:74] duration metric: took 9.259426ms to wait for pod list to return data ...
	I1017 21:13:10.329179  809815 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:13:10.338052  809815 default_sa.go:45] found service account: "default"
	I1017 21:13:10.338093  809815 default_sa.go:55] duration metric: took 8.907881ms for default service account to be created ...
	I1017 21:13:10.338103  809815 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:13:10.352316  809815 system_pods.go:86] 8 kube-system pods found
	I1017 21:13:10.352359  809815 system_pods.go:89] "coredns-5dd5756b68-vbl7d" [955afcc9-f2a7-4a58-aef7-bf782ee6e489] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:13:10.352368  809815 system_pods.go:89] "etcd-old-k8s-version-521710" [6f2ccab5-5723-437b-b822-ed782d336f57] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:13:10.352375  809815 system_pods.go:89] "kindnet-w5t9r" [1a59d731-1286-45cc-ba5b-6c62ec8d01bc] Running
	I1017 21:13:10.352382  809815 system_pods.go:89] "kube-apiserver-old-k8s-version-521710" [8137a734-9f3c-4ae7-9b1b-cefc9a35a9a1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:13:10.352394  809815 system_pods.go:89] "kube-controller-manager-old-k8s-version-521710" [e1a50f99-6e96-4e59-8734-279929eca7b8] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:13:10.352405  809815 system_pods.go:89] "kube-proxy-dz7dm" [c06470bc-984f-4133-9b6b-9a07628779d6] Running
	I1017 21:13:10.352413  809815 system_pods.go:89] "kube-scheduler-old-k8s-version-521710" [bce9aad8-9075-4ba2-a4e9-d9ba8de0dd75] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:13:10.352431  809815 system_pods.go:89] "storage-provisioner" [66dcd538-5f45-4c68-99af-7376cbcaa0f4] Running
	I1017 21:13:10.352439  809815 system_pods.go:126] duration metric: took 14.329617ms to wait for k8s-apps to be running ...
	I1017 21:13:10.352451  809815 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:13:10.352520  809815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:13:10.382228  809815 system_svc.go:56] duration metric: took 29.76686ms WaitForService to wait for kubelet
	I1017 21:13:10.382268  809815 kubeadm.go:586] duration metric: took 9.652476823s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:13:10.382290  809815 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:13:10.388019  809815 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:13:10.388056  809815 node_conditions.go:123] node cpu capacity is 2
	I1017 21:13:10.388082  809815 node_conditions.go:105] duration metric: took 5.785557ms to run NodePressure ...
	I1017 21:13:10.388095  809815 start.go:241] waiting for startup goroutines ...
	I1017 21:13:10.388103  809815 start.go:246] waiting for cluster config update ...
	I1017 21:13:10.388114  809815 start.go:255] writing updated cluster config ...
	I1017 21:13:10.388418  809815 ssh_runner.go:195] Run: rm -f paused
	I1017 21:13:10.397646  809815 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:13:10.402854  809815 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-vbl7d" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 21:13:12.410796  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:14.033330  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	W1017 21:13:16.033892  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	W1017 21:13:14.908697  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:16.908867  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:19.409585  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:18.533139  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	W1017 21:13:21.032958  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	W1017 21:13:21.908321  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:23.912870  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:23.034283  806397 node_ready.go:57] node "no-preload-820018" has "Ready":"False" status (will retry)
	I1017 21:13:24.037657  806397 node_ready.go:49] node "no-preload-820018" is "Ready"
	I1017 21:13:24.037701  806397 node_ready.go:38] duration metric: took 14.507374454s for node "no-preload-820018" to be "Ready" ...
	I1017 21:13:24.037716  806397 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:13:24.037782  806397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:13:24.110805  806397 api_server.go:72] duration metric: took 15.383840364s to wait for apiserver process to appear ...
	I1017 21:13:24.110838  806397 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:13:24.110860  806397 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1017 21:13:24.128982  806397 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1017 21:13:24.130231  806397 api_server.go:141] control plane version: v1.34.1
	I1017 21:13:24.130268  806397 api_server.go:131] duration metric: took 19.422561ms to wait for apiserver health ...
	I1017 21:13:24.130278  806397 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:13:24.145136  806397 system_pods.go:59] 8 kube-system pods found
	I1017 21:13:24.145197  806397 system_pods.go:61] "coredns-66bc5c9577-zr7ck" [809b8090-79d8-451e-8bbd-01dd5e872065] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:13:24.145207  806397 system_pods.go:61] "etcd-no-preload-820018" [92cb2f75-23dc-4a77-9a3e-766c22130b67] Running
	I1017 21:13:24.145214  806397 system_pods.go:61] "kindnet-s9bz8" [a908966a-493f-47d0-9a61-648348974b74] Running
	I1017 21:13:24.145226  806397 system_pods.go:61] "kube-apiserver-no-preload-820018" [ae1af5a5-a5fb-474e-8ae4-9853472cfae6] Running
	I1017 21:13:24.145231  806397 system_pods.go:61] "kube-controller-manager-no-preload-820018" [87837df3-d073-49c8-a4ed-e1389ee0f615] Running
	I1017 21:13:24.145236  806397 system_pods.go:61] "kube-proxy-qkvkh" [0437b02c-f1d6-41ed-806e-0a6bf6ddea3f] Running
	I1017 21:13:24.145246  806397 system_pods.go:61] "kube-scheduler-no-preload-820018" [e1097625-88f8-4377-ba13-eadd27e78706] Running
	I1017 21:13:24.145252  806397 system_pods.go:61] "storage-provisioner" [acc08441-b365-4e32-bc0e-f4a0a54e841e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:13:24.145270  806397 system_pods.go:74] duration metric: took 14.985831ms to wait for pod list to return data ...
	I1017 21:13:24.145280  806397 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:13:24.165277  806397 default_sa.go:45] found service account: "default"
	I1017 21:13:24.165306  806397 default_sa.go:55] duration metric: took 20.01905ms for default service account to be created ...
	I1017 21:13:24.165325  806397 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:13:24.237724  806397 system_pods.go:86] 8 kube-system pods found
	I1017 21:13:24.237762  806397 system_pods.go:89] "coredns-66bc5c9577-zr7ck" [809b8090-79d8-451e-8bbd-01dd5e872065] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:13:24.237770  806397 system_pods.go:89] "etcd-no-preload-820018" [92cb2f75-23dc-4a77-9a3e-766c22130b67] Running
	I1017 21:13:24.237777  806397 system_pods.go:89] "kindnet-s9bz8" [a908966a-493f-47d0-9a61-648348974b74] Running
	I1017 21:13:24.237781  806397 system_pods.go:89] "kube-apiserver-no-preload-820018" [ae1af5a5-a5fb-474e-8ae4-9853472cfae6] Running
	I1017 21:13:24.237787  806397 system_pods.go:89] "kube-controller-manager-no-preload-820018" [87837df3-d073-49c8-a4ed-e1389ee0f615] Running
	I1017 21:13:24.237791  806397 system_pods.go:89] "kube-proxy-qkvkh" [0437b02c-f1d6-41ed-806e-0a6bf6ddea3f] Running
	I1017 21:13:24.237795  806397 system_pods.go:89] "kube-scheduler-no-preload-820018" [e1097625-88f8-4377-ba13-eadd27e78706] Running
	I1017 21:13:24.237806  806397 system_pods.go:89] "storage-provisioner" [acc08441-b365-4e32-bc0e-f4a0a54e841e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:13:24.237824  806397 retry.go:31] will retry after 242.609157ms: missing components: kube-dns
	I1017 21:13:24.525479  806397 system_pods.go:86] 8 kube-system pods found
	I1017 21:13:24.525520  806397 system_pods.go:89] "coredns-66bc5c9577-zr7ck" [809b8090-79d8-451e-8bbd-01dd5e872065] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:13:24.525528  806397 system_pods.go:89] "etcd-no-preload-820018" [92cb2f75-23dc-4a77-9a3e-766c22130b67] Running
	I1017 21:13:24.525534  806397 system_pods.go:89] "kindnet-s9bz8" [a908966a-493f-47d0-9a61-648348974b74] Running
	I1017 21:13:24.525538  806397 system_pods.go:89] "kube-apiserver-no-preload-820018" [ae1af5a5-a5fb-474e-8ae4-9853472cfae6] Running
	I1017 21:13:24.525543  806397 system_pods.go:89] "kube-controller-manager-no-preload-820018" [87837df3-d073-49c8-a4ed-e1389ee0f615] Running
	I1017 21:13:24.525549  806397 system_pods.go:89] "kube-proxy-qkvkh" [0437b02c-f1d6-41ed-806e-0a6bf6ddea3f] Running
	I1017 21:13:24.525553  806397 system_pods.go:89] "kube-scheduler-no-preload-820018" [e1097625-88f8-4377-ba13-eadd27e78706] Running
	I1017 21:13:24.525558  806397 system_pods.go:89] "storage-provisioner" [acc08441-b365-4e32-bc0e-f4a0a54e841e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:13:24.525568  806397 system_pods.go:126] duration metric: took 360.236294ms to wait for k8s-apps to be running ...
	I1017 21:13:24.525577  806397 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:13:24.525635  806397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:13:24.557402  806397 system_svc.go:56] duration metric: took 31.814912ms WaitForService to wait for kubelet
	I1017 21:13:24.557442  806397 kubeadm.go:586] duration metric: took 15.830485727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:13:24.557462  806397 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:13:24.576302  806397 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:13:24.576351  806397 node_conditions.go:123] node cpu capacity is 2
	I1017 21:13:24.576364  806397 node_conditions.go:105] duration metric: took 18.896983ms to run NodePressure ...
	I1017 21:13:24.576377  806397 start.go:241] waiting for startup goroutines ...
	I1017 21:13:24.576386  806397 start.go:246] waiting for cluster config update ...
	I1017 21:13:24.576397  806397 start.go:255] writing updated cluster config ...
	I1017 21:13:24.576694  806397 ssh_runner.go:195] Run: rm -f paused
	I1017 21:13:24.581059  806397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:13:24.589852  806397 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zr7ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.597838  806397 pod_ready.go:94] pod "coredns-66bc5c9577-zr7ck" is "Ready"
	I1017 21:13:25.597869  806397 pod_ready.go:86] duration metric: took 1.007977583s for pod "coredns-66bc5c9577-zr7ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.601043  806397 pod_ready.go:83] waiting for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.606201  806397 pod_ready.go:94] pod "etcd-no-preload-820018" is "Ready"
	I1017 21:13:25.606228  806397 pod_ready.go:86] duration metric: took 5.158923ms for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.608848  806397 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.615963  806397 pod_ready.go:94] pod "kube-apiserver-no-preload-820018" is "Ready"
	I1017 21:13:25.615991  806397 pod_ready.go:86] duration metric: took 7.116792ms for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.618869  806397 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.793755  806397 pod_ready.go:94] pod "kube-controller-manager-no-preload-820018" is "Ready"
	I1017 21:13:25.793779  806397 pod_ready.go:86] duration metric: took 174.888918ms for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:25.993590  806397 pod_ready.go:83] waiting for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:26.393541  806397 pod_ready.go:94] pod "kube-proxy-qkvkh" is "Ready"
	I1017 21:13:26.393570  806397 pod_ready.go:86] duration metric: took 399.88206ms for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:26.594332  806397 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:26.993864  806397 pod_ready.go:94] pod "kube-scheduler-no-preload-820018" is "Ready"
	I1017 21:13:26.993895  806397 pod_ready.go:86] duration metric: took 399.532213ms for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:13:26.993909  806397 pod_ready.go:40] duration metric: took 2.41281352s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:13:27.089004  806397 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:13:27.092526  806397 out.go:179] * Done! kubectl is now configured to use "no-preload-820018" cluster and "default" namespace by default
	W1017 21:13:26.409779  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:28.910268  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:30.911761  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
	W1017 21:13:33.409799  809815 pod_ready.go:104] pod "coredns-5dd5756b68-vbl7d" is not "Ready", error: <nil>
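	The node/pod readiness and healthz checks logged above can be reproduced by hand against the same cluster; a minimal sketch, assuming the kubectl context carries the profile name no-preload-820018 (as the "Done!" line above indicates):
	
	  $ kubectl --context no-preload-820018 get node no-preload-820018 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  $ kubectl --context no-preload-820018 -n kube-system get pods -l k8s-app=kube-dns
	  $ kubectl --context no-preload-820018 get --raw /healthz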
	
	
	==> CRI-O <==
	Oct 17 21:13:24 no-preload-820018 crio[837]: time="2025-10-17T21:13:24.152732828Z" level=info msg="Created container 831327a095765ba3a825d4dec3267af1f80cb4d6df1b111e35d8ddcb6cc761c5: kube-system/coredns-66bc5c9577-zr7ck/coredns" id=7d4dfb0a-c46c-47c4-9d7a-81e216dc29f7 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:13:24 no-preload-820018 crio[837]: time="2025-10-17T21:13:24.154575183Z" level=info msg="Starting container: 831327a095765ba3a825d4dec3267af1f80cb4d6df1b111e35d8ddcb6cc761c5" id=a821cd08-2380-4c4a-b43d-e87955ff0a90 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:13:24 no-preload-820018 crio[837]: time="2025-10-17T21:13:24.160739929Z" level=info msg="Started container" PID=2509 containerID=831327a095765ba3a825d4dec3267af1f80cb4d6df1b111e35d8ddcb6cc761c5 description=kube-system/coredns-66bc5c9577-zr7ck/coredns id=a821cd08-2380-4c4a-b43d-e87955ff0a90 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b8df220fd68a7f4a07a8fec6054d1fe195cfc7ac5093a8dd3eec754c4da13406
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.703914537Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9fb8ad53-b0c2-46f8-9014-232759ba6c96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.703990945Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.713433044Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a UID:0ef24a65-39ad-473e-95c1-3c893463f1c4 NetNS:/var/run/netns/3abb87bf-8dad-437a-9fef-181db1747cc8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002714b48}] Aliases:map[]}"
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.713476893Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.726591847Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a UID:0ef24a65-39ad-473e-95c1-3c893463f1c4 NetNS:/var/run/netns/3abb87bf-8dad-437a-9fef-181db1747cc8 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4002714b48}] Aliases:map[]}"
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.727874441Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.736085541Z" level=info msg="Ran pod sandbox 6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a with infra container: default/busybox/POD" id=9fb8ad53-b0c2-46f8-9014-232759ba6c96 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.737713986Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f50e97b6-bd8e-4b33-b717-6f4fabdeafbb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.737933378Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=f50e97b6-bd8e-4b33-b717-6f4fabdeafbb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.738042491Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=f50e97b6-bd8e-4b33-b717-6f4fabdeafbb name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.740428912Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f450f2db-e92f-4149-b6c0-9c555492043b name=/runtime.v1.ImageService/PullImage
	Oct 17 21:13:27 no-preload-820018 crio[837]: time="2025-10-17T21:13:27.745784218Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.86364013Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=f450f2db-e92f-4149-b6c0-9c555492043b name=/runtime.v1.ImageService/PullImage
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.864589772Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99fdddff-c431-4428-af00-393996c12118 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.866233889Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76e52035-fa65-4bfa-8acb-ee532fec3285 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.875083693Z" level=info msg="Creating container: default/busybox/busybox" id=4fe4ca22-b6a0-4d4d-83d3-567069881d4c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.876235298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.887815881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.888338554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.915499733Z" level=info msg="Created container 7707d8ed152335a76b696dae0d8644e2ff3ad9f09d0cc0a59a191f583e6e8d44: default/busybox/busybox" id=4fe4ca22-b6a0-4d4d-83d3-567069881d4c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.918239348Z" level=info msg="Starting container: 7707d8ed152335a76b696dae0d8644e2ff3ad9f09d0cc0a59a191f583e6e8d44" id=32154e3c-ec28-4883-b8b5-87547209a63f name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:13:29 no-preload-820018 crio[837]: time="2025-10-17T21:13:29.926580985Z" level=info msg="Started container" PID=2563 containerID=7707d8ed152335a76b696dae0d8644e2ff3ad9f09d0cc0a59a191f583e6e8d44 description=default/busybox/busybox id=32154e3c-ec28-4883-b8b5-87547209a63f name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a
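	The pull/create/start sequence for default/busybox shown above can be checked on the node with crictl; a hedged sketch, run inside the node via `minikube -p no-preload-820018 ssh` and assuming crictl is available there (it normally is with the crio runtime; the container ID prefix comes from the container status table below):
	
	  $ sudo crictl images | grep busybox
	  $ sudo crictl ps --name busybox
	  $ sudo crictl inspect 7707d8ed15233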
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7707d8ed15233       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   6c0cd17dd77f7       busybox                                     default
	831327a095765       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      13 seconds ago      Running             coredns                   0                   b8df220fd68a7       coredns-66bc5c9577-zr7ck                    kube-system
	83702c4bbb5df       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      13 seconds ago      Running             storage-provisioner       0                   42dad560b8343       storage-provisioner                         kube-system
	01641582f8318       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   f3795483a0357       kindnet-s9bz8                               kube-system
	e4da2d5642f27       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      27 seconds ago      Running             kube-proxy                0                   73cbf56561b93       kube-proxy-qkvkh                            kube-system
	93ff9cb781f56       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      44 seconds ago      Running             kube-apiserver            0                   f41316c4e2b1d       kube-apiserver-no-preload-820018            kube-system
	252a74459fb86       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      44 seconds ago      Running             kube-controller-manager   0                   1c906240b5cbc       kube-controller-manager-no-preload-820018   kube-system
	a30551791c94e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      44 seconds ago      Running             etcd                      0                   5331a8cc74723       etcd-no-preload-820018                      kube-system
	b90243a24e34d       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      44 seconds ago      Running             kube-scheduler            0                   f13fc0c256e5a       kube-scheduler-no-preload-820018            kube-system
	
	
	==> coredns [831327a095765ba3a825d4dec3267af1f80cb4d6df1b111e35d8ddcb6cc761c5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41772 - 53978 "HINFO IN 1282116900785135873.7678005978027420858. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023696121s
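	For reference, the block that the sed edit at 21:13:09 injects ahead of the `forward . /etc/resolv.conf` line of this Corefile (reconstructed from that command; only the indentation is approximate) is:
	
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }
	
	with an additional `log` directive inserted before `errors`; the live copy can be read back with `kubectl -n kube-system get configmap coredns -o yaml`.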
	
	
	==> describe nodes <==
	Name:               no-preload-820018
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-820018
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-820018
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:13:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820018
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:13:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:13:34 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:13:34 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:13:34 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:13:34 +0000   Fri, 17 Oct 2025 21:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-820018
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                54655725-7d36-48a4-9452-fd60671cfec5
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-zr7ck                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-820018                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-s9bz8                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-820018             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-820018    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-qkvkh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-820018             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 27s                kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-820018 event: Registered Node no-preload-820018 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-820018 status is now: NodeReady
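	This block corresponds to `kubectl describe node no-preload-820018`; the underlying conditions and events can also be pulled individually (sketch, context name assumed to match the profile):
	
	  $ kubectl --context no-preload-820018 get node no-preload-820018 \
	      -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  $ kubectl --context no-preload-820018 get events \
	      --field-selector involvedObject.name=no-preload-820018 --sort-by=.lastTimestamp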
	
	
	==> dmesg <==
	[  +8.199606] overlayfs: idmapped layers are currently not supported
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [a30551791c94ec6baba47363094818ce40ddedfed56c27e32c964f5262959f79] <==
	{"level":"warn","ts":"2025-10-17T21:12:58.292269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.319733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.334400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.355091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.375804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.395019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.416087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.440461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.446373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.487174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.504704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.526435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.550235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.568490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.581856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.605000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.616740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.643976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.674018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.689535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.747213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.766939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.780611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.833639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:12:58.924415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:13:38 up  3:56,  0 user,  load average: 4.06, 3.87, 3.20
	Linux no-preload-820018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [01641582f831867109ff3de686f44655525d08c2b8a2656eb2913809c4206b44] <==
	I1017 21:13:13.027778       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:13:13.028545       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:13:13.028690       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:13:13.028706       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:13:13.028721       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:13:13Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:13:13.319545       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:13:13.319576       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:13:13.319591       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:13:13.320447       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1017 21:13:13.519736       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:13:13.519842       1 metrics.go:72] Registering metrics
	I1017 21:13:13.519943       1 controller.go:711] "Syncing nftables rules"
	I1017 21:13:23.235875       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:13:23.235986       1 main.go:301] handling current node
	I1017 21:13:33.230439       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:13:33.230500       1 main.go:301] handling current node
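	The MTU and "noMask IPv4 subnets" settings above end up in the CNI config kindnet maintains under /etc/cni/net.d/ on the node; a hedged way to look at it (exact filename may vary by kindnet version):
	
	  $ minikube -p no-preload-820018 ssh -- "sudo cat /etc/cni/net.d/*"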
	
	
	==> kube-apiserver [93ff9cb781f5605b586859e09eb7249c363f5ffd57e01874dd377db62f375520] <==
	I1017 21:13:00.920615       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:13:00.925660       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:13:01.021178       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:13:01.021332       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 21:13:01.061967       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:13:01.092792       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:13:01.113099       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:13:01.201757       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 21:13:01.231408       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 21:13:01.231714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:13:02.648174       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:13:02.726847       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:13:02.817849       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 21:13:02.827847       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 21:13:02.828989       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:13:02.835823       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:13:02.953118       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:13:03.920530       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:13:03.943079       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 21:13:03.962271       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 21:13:08.318104       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:13:08.390435       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:13:08.828051       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:13:09.056343       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1017 21:13:36.519665       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:45860: use of closed network connection
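	The "allocated clusterIPs" entries above can be confirmed from the Service objects themselves (sketch, context and default namespace assumed to match what the "Done!" line configured):
	
	  $ kubectl --context no-preload-820018 get svc kubernetes -o wide
	  $ kubectl --context no-preload-820018 -n kube-system get svc kube-dns -o wide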
	
	
	==> kube-controller-manager [252a74459fb8608fd74d4c57b1408e06d72beb564537cac6d62f0e6bb38641d6] <==
	I1017 21:13:08.103023       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:13:08.103632       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 21:13:08.105096       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:13:08.105113       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:13:08.105118       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:13:08.110386       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:13:08.110857       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:13:08.119895       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:13:08.120079       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:13:08.130698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:13:08.148041       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:13:08.150245       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:13:08.150722       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:13:08.150963       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 21:13:08.159326       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:13:08.163750       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:13:08.163866       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 21:13:08.164844       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 21:13:08.164907       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 21:13:08.164937       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 21:13:08.164979       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:13:08.170510       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 21:13:08.170773       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:13:08.228671       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-820018" podCIDRs=["10.244.0.0/24"]
	I1017 21:13:28.088760       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
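	The "Set node PodCIDR" entry matches the PodCIDR/PodCIDRs fields in the node description above; they can be read back directly (sketch, context name assumed):
	
	  $ kubectl --context no-preload-820018 get node no-preload-820018 \
	      -o jsonpath='{.spec.podCIDR}{" "}{.spec.podCIDRs}{"\n"}'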
	
	
	==> kube-proxy [e4da2d5642f27a3a58f9f867616fb244f304933c77c6bb42eb454870fe3a796e] <==
	I1017 21:13:10.387308       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:13:10.493971       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:13:10.594830       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:13:10.594863       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:13:10.594942       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:13:10.744649       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:13:10.744718       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:13:10.749531       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:13:10.749998       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:13:10.750011       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:13:10.754771       1 config.go:200] "Starting service config controller"
	I1017 21:13:10.754793       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:13:10.754809       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:13:10.754813       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:13:10.754825       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:13:10.754829       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:13:10.755551       1 config.go:309] "Starting node config controller"
	I1017 21:13:10.755560       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:13:10.755567       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:13:10.858584       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:13:10.858642       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 21:13:10.858916       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
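	The "Kube-proxy configuration may be incomplete or incorrect" warning above refers to the nodePortAddresses field of the kube-proxy configuration (the warning itself suggests `--nodeport-addresses primary`); on a kubeadm-provisioned cluster such as this one the effective value lives in the kube-proxy ConfigMap and can be checked with (sketch):
	
	  $ kubectl --context no-preload-820018 -n kube-system get configmap kube-proxy -o yaml | grep -n -A2 nodePortAddresses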
	
	
	==> kube-scheduler [b90243a24e34d77610fefb100ca44263156be514b716497ba9dc3b69f1f77931] <==
	E1017 21:13:01.093576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:13:01.093669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 21:13:01.093780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 21:13:01.093866       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:13:01.093958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 21:13:01.094051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 21:13:01.094136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 21:13:01.094269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:13:01.094815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:13:01.094951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 21:13:01.095037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 21:13:01.095279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:13:01.105414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 21:13:01.105500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:13:01.105521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:13:01.105537       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:13:01.930124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:13:02.022074       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 21:13:02.053523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:13:02.053695       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:13:02.109419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:13:02.117853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:13:02.139406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:13:02.184049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 21:13:04.146573       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:13:08 no-preload-820018 kubelet[2028]: I1017 21:13:08.298072    2028 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468795    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0437b02c-f1d6-41ed-806e-0a6bf6ddea3f-xtables-lock\") pod \"kube-proxy-qkvkh\" (UID: \"0437b02c-f1d6-41ed-806e-0a6bf6ddea3f\") " pod="kube-system/kube-proxy-qkvkh"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468857    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a908966a-493f-47d0-9a61-648348974b74-xtables-lock\") pod \"kindnet-s9bz8\" (UID: \"a908966a-493f-47d0-9a61-648348974b74\") " pod="kube-system/kindnet-s9bz8"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468883    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0437b02c-f1d6-41ed-806e-0a6bf6ddea3f-lib-modules\") pod \"kube-proxy-qkvkh\" (UID: \"0437b02c-f1d6-41ed-806e-0a6bf6ddea3f\") " pod="kube-system/kube-proxy-qkvkh"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468905    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb84b\" (UniqueName: \"kubernetes.io/projected/0437b02c-f1d6-41ed-806e-0a6bf6ddea3f-kube-api-access-hb84b\") pod \"kube-proxy-qkvkh\" (UID: \"0437b02c-f1d6-41ed-806e-0a6bf6ddea3f\") " pod="kube-system/kube-proxy-qkvkh"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468938    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkc5q\" (UniqueName: \"kubernetes.io/projected/a908966a-493f-47d0-9a61-648348974b74-kube-api-access-lkc5q\") pod \"kindnet-s9bz8\" (UID: \"a908966a-493f-47d0-9a61-648348974b74\") " pod="kube-system/kindnet-s9bz8"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468957    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0437b02c-f1d6-41ed-806e-0a6bf6ddea3f-kube-proxy\") pod \"kube-proxy-qkvkh\" (UID: \"0437b02c-f1d6-41ed-806e-0a6bf6ddea3f\") " pod="kube-system/kube-proxy-qkvkh"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468972    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a908966a-493f-47d0-9a61-648348974b74-cni-cfg\") pod \"kindnet-s9bz8\" (UID: \"a908966a-493f-47d0-9a61-648348974b74\") " pod="kube-system/kindnet-s9bz8"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.468989    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a908966a-493f-47d0-9a61-648348974b74-lib-modules\") pod \"kindnet-s9bz8\" (UID: \"a908966a-493f-47d0-9a61-648348974b74\") " pod="kube-system/kindnet-s9bz8"
	Oct 17 21:13:09 no-preload-820018 kubelet[2028]: I1017 21:13:09.719454    2028 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:13:10 no-preload-820018 kubelet[2028]: W1017 21:13:10.020718    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-73cbf56561b93d1d0f987ca3323769d50a5cdba744a956eb864a2d09f505d4d9 WatchSource:0}: Error finding container 73cbf56561b93d1d0f987ca3323769d50a5cdba744a956eb864a2d09f505d4d9: Status 404 returned error can't find the container with id 73cbf56561b93d1d0f987ca3323769d50a5cdba744a956eb864a2d09f505d4d9
	Oct 17 21:13:10 no-preload-820018 kubelet[2028]: W1017 21:13:10.043439    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-f3795483a0357ea9191d542f2b4da3f0634b739518419a673406c3ffba1af7bc WatchSource:0}: Error finding container f3795483a0357ea9191d542f2b4da3f0634b739518419a673406c3ffba1af7bc: Status 404 returned error can't find the container with id f3795483a0357ea9191d542f2b4da3f0634b739518419a673406c3ffba1af7bc
	Oct 17 21:13:10 no-preload-820018 kubelet[2028]: I1017 21:13:10.432677    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qkvkh" podStartSLOduration=1.43265896 podStartE2EDuration="1.43265896s" podCreationTimestamp="2025-10-17 21:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:13:10.432492369 +0000 UTC m=+6.586768491" watchObservedRunningTime="2025-10-17 21:13:10.43265896 +0000 UTC m=+6.586935073"
	Oct 17 21:13:14 no-preload-820018 kubelet[2028]: I1017 21:13:14.232416    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-s9bz8" podStartSLOduration=2.397323355 podStartE2EDuration="5.232394697s" podCreationTimestamp="2025-10-17 21:13:09 +0000 UTC" firstStartedPulling="2025-10-17 21:13:10.082770509 +0000 UTC m=+6.237046623" lastFinishedPulling="2025-10-17 21:13:12.917841843 +0000 UTC m=+9.072117965" observedRunningTime="2025-10-17 21:13:13.415256825 +0000 UTC m=+9.569532939" watchObservedRunningTime="2025-10-17 21:13:14.232394697 +0000 UTC m=+10.386670810"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: I1017 21:13:23.568964    2028 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: I1017 21:13:23.780542    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/809b8090-79d8-451e-8bbd-01dd5e872065-config-volume\") pod \"coredns-66bc5c9577-zr7ck\" (UID: \"809b8090-79d8-451e-8bbd-01dd5e872065\") " pod="kube-system/coredns-66bc5c9577-zr7ck"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: I1017 21:13:23.780597    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/acc08441-b365-4e32-bc0e-f4a0a54e841e-tmp\") pod \"storage-provisioner\" (UID: \"acc08441-b365-4e32-bc0e-f4a0a54e841e\") " pod="kube-system/storage-provisioner"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: I1017 21:13:23.780622    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66bhk\" (UniqueName: \"kubernetes.io/projected/acc08441-b365-4e32-bc0e-f4a0a54e841e-kube-api-access-66bhk\") pod \"storage-provisioner\" (UID: \"acc08441-b365-4e32-bc0e-f4a0a54e841e\") " pod="kube-system/storage-provisioner"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: I1017 21:13:23.780650    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5fcb\" (UniqueName: \"kubernetes.io/projected/809b8090-79d8-451e-8bbd-01dd5e872065-kube-api-access-q5fcb\") pod \"coredns-66bc5c9577-zr7ck\" (UID: \"809b8090-79d8-451e-8bbd-01dd5e872065\") " pod="kube-system/coredns-66bc5c9577-zr7ck"
	Oct 17 21:13:23 no-preload-820018 kubelet[2028]: W1017 21:13:23.948088    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-42dad560b83436b1f0c639154b93d19a253624a867554ddaef804a86efe495a7 WatchSource:0}: Error finding container 42dad560b83436b1f0c639154b93d19a253624a867554ddaef804a86efe495a7: Status 404 returned error can't find the container with id 42dad560b83436b1f0c639154b93d19a253624a867554ddaef804a86efe495a7
	Oct 17 21:13:24 no-preload-820018 kubelet[2028]: W1017 21:13:24.009704    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-b8df220fd68a7f4a07a8fec6054d1fe195cfc7ac5093a8dd3eec754c4da13406 WatchSource:0}: Error finding container b8df220fd68a7f4a07a8fec6054d1fe195cfc7ac5093a8dd3eec754c4da13406: Status 404 returned error can't find the container with id b8df220fd68a7f4a07a8fec6054d1fe195cfc7ac5093a8dd3eec754c4da13406
	Oct 17 21:13:24 no-preload-820018 kubelet[2028]: I1017 21:13:24.545931    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zr7ck" podStartSLOduration=15.545911902 podStartE2EDuration="15.545911902s" podCreationTimestamp="2025-10-17 21:13:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:13:24.455542861 +0000 UTC m=+20.609818984" watchObservedRunningTime="2025-10-17 21:13:24.545911902 +0000 UTC m=+20.700188041"
	Oct 17 21:13:25 no-preload-820018 kubelet[2028]: I1017 21:13:25.443465    2028 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.443441283 podStartE2EDuration="15.443441283s" podCreationTimestamp="2025-10-17 21:13:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:13:24.549698908 +0000 UTC m=+20.703975030" watchObservedRunningTime="2025-10-17 21:13:25.443441283 +0000 UTC m=+21.597717397"
	Oct 17 21:13:27 no-preload-820018 kubelet[2028]: I1017 21:13:27.509333    2028 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt7wr\" (UniqueName: \"kubernetes.io/projected/0ef24a65-39ad-473e-95c1-3c893463f1c4-kube-api-access-bt7wr\") pod \"busybox\" (UID: \"0ef24a65-39ad-473e-95c1-3c893463f1c4\") " pod="default/busybox"
	Oct 17 21:13:27 no-preload-820018 kubelet[2028]: W1017 21:13:27.734402    2028 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a WatchSource:0}: Error finding container 6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a: Status 404 returned error can't find the container with id 6c0cd17dd77f79eceec76e56065797b7ccce1c10a31078480c76d89fa798960a
	
	
	==> storage-provisioner [83702c4bbb5df5bd97014bf1e0da666c146b6889344db684825055d236108675] <==
	I1017 21:13:24.058710       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:13:24.092734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:13:24.093406       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:13:24.108191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:24.148056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:13:24.148232       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:13:24.148435       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820018_3afebc68-7d96-4dd5-9629-1c8873b1ef4a!
	I1017 21:13:24.152108       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29c5c6d-086f-48e5-9bd2-362e2a3b2aa8", APIVersion:"v1", ResourceVersion:"452", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820018_3afebc68-7d96-4dd5-9629-1c8873b1ef4a became leader
	W1017 21:13:24.192283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:24.200283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:13:24.249901       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820018_3afebc68-7d96-4dd5-9629-1c8873b1ef4a!
	W1017 21:13:26.204234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:26.209303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:28.213107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:28.221030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:30.225552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:30.231338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:32.234972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:32.241670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:34.246242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:34.251182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:36.254565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:36.258911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:38.262062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:13:38.266680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820018 -n no-preload-820018
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-820018 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (9.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-521710 --alsologtostderr -v=1
E1017 21:14:01.312572  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-521710 --alsologtostderr -v=1: exit status 80 (2.72747062s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-521710 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 21:13:59.900598  814992 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:13:59.901458  814992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:59.901481  814992 out.go:374] Setting ErrFile to fd 2...
	I1017 21:13:59.901486  814992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:59.901776  814992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:13:59.902069  814992 out.go:368] Setting JSON to false
	I1017 21:13:59.902095  814992 mustload.go:65] Loading cluster: old-k8s-version-521710
	I1017 21:13:59.902500  814992 config.go:182] Loaded profile config "old-k8s-version-521710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1017 21:13:59.903034  814992 cli_runner.go:164] Run: docker container inspect old-k8s-version-521710 --format={{.State.Status}}
	I1017 21:13:59.929493  814992 host.go:66] Checking if "old-k8s-version-521710" exists ...
	I1017 21:13:59.929832  814992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:00.055831  814992 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 21:14:00.039343837 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:00.056554  814992 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-521710 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 21:14:00.060735  814992 out.go:179] * Pausing node old-k8s-version-521710 ... 
	I1017 21:14:00.063971  814992 host.go:66] Checking if "old-k8s-version-521710" exists ...
	I1017 21:14:00.064368  814992 ssh_runner.go:195] Run: systemctl --version
	I1017 21:14:00.064416  814992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-521710
	I1017 21:14:00.103322  814992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33834 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/old-k8s-version-521710/id_rsa Username:docker}
	I1017 21:14:00.228178  814992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:00.247250  814992 pause.go:52] kubelet running: true
	I1017 21:14:00.247333  814992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:00.694993  814992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:00.695086  814992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:00.841312  814992 cri.go:89] found id: "2695b5995422f52d2280a5c04a29a312332fd9c9207c0d9ceb8a4a2415d6f942"
	I1017 21:14:00.841381  814992 cri.go:89] found id: "39d79d60ebfd0b8d8522cbd3f39d40d526cbb7af715741008c8de6a9b84e0697"
	I1017 21:14:00.841416  814992 cri.go:89] found id: "ab89645fc3810009284b0ce9f74a350c1b848246e6f675169b08e6d8a64246d3"
	I1017 21:14:00.841439  814992 cri.go:89] found id: "ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1"
	I1017 21:14:00.841456  814992 cri.go:89] found id: "b8d370901c995c38b064dd226c47b831e35b4405a3fe066e8cf76e1661949864"
	I1017 21:14:00.841490  814992 cri.go:89] found id: "33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45"
	I1017 21:14:00.841511  814992 cri.go:89] found id: "d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038"
	I1017 21:14:00.841529  814992 cri.go:89] found id: "458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a"
	I1017 21:14:00.841546  814992 cri.go:89] found id: "14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d"
	I1017 21:14:00.841584  814992 cri.go:89] found id: "d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	I1017 21:14:00.841606  814992 cri.go:89] found id: "e243983797b3e7a9ac1f797e5ff4593ae3fa1ce937f2f6e8b4adf65a4d116c0b"
	I1017 21:14:00.841624  814992 cri.go:89] found id: ""
	I1017 21:14:00.841704  814992 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:00.854614  814992 retry.go:31] will retry after 270.904869ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:00Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:14:01.126178  814992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:01.146121  814992 pause.go:52] kubelet running: false
	I1017 21:14:01.146285  814992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:01.412665  814992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:01.412752  814992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:01.540213  814992 cri.go:89] found id: "2695b5995422f52d2280a5c04a29a312332fd9c9207c0d9ceb8a4a2415d6f942"
	I1017 21:14:01.540285  814992 cri.go:89] found id: "39d79d60ebfd0b8d8522cbd3f39d40d526cbb7af715741008c8de6a9b84e0697"
	I1017 21:14:01.540304  814992 cri.go:89] found id: "ab89645fc3810009284b0ce9f74a350c1b848246e6f675169b08e6d8a64246d3"
	I1017 21:14:01.540322  814992 cri.go:89] found id: "ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1"
	I1017 21:14:01.540355  814992 cri.go:89] found id: "b8d370901c995c38b064dd226c47b831e35b4405a3fe066e8cf76e1661949864"
	I1017 21:14:01.540374  814992 cri.go:89] found id: "33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45"
	I1017 21:14:01.540389  814992 cri.go:89] found id: "d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038"
	I1017 21:14:01.540436  814992 cri.go:89] found id: "458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a"
	I1017 21:14:01.540457  814992 cri.go:89] found id: "14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d"
	I1017 21:14:01.540479  814992 cri.go:89] found id: "d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	I1017 21:14:01.540496  814992 cri.go:89] found id: "e243983797b3e7a9ac1f797e5ff4593ae3fa1ce937f2f6e8b4adf65a4d116c0b"
	I1017 21:14:01.540525  814992 cri.go:89] found id: ""
	I1017 21:14:01.540606  814992 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:01.552710  814992 retry.go:31] will retry after 513.350602ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:01Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:14:02.066286  814992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:02.091594  814992 pause.go:52] kubelet running: false
	I1017 21:14:02.091718  814992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:02.370106  814992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:02.370185  814992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:02.498106  814992 cri.go:89] found id: "2695b5995422f52d2280a5c04a29a312332fd9c9207c0d9ceb8a4a2415d6f942"
	I1017 21:14:02.498130  814992 cri.go:89] found id: "39d79d60ebfd0b8d8522cbd3f39d40d526cbb7af715741008c8de6a9b84e0697"
	I1017 21:14:02.498136  814992 cri.go:89] found id: "ab89645fc3810009284b0ce9f74a350c1b848246e6f675169b08e6d8a64246d3"
	I1017 21:14:02.498139  814992 cri.go:89] found id: "ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1"
	I1017 21:14:02.498143  814992 cri.go:89] found id: "b8d370901c995c38b064dd226c47b831e35b4405a3fe066e8cf76e1661949864"
	I1017 21:14:02.498146  814992 cri.go:89] found id: "33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45"
	I1017 21:14:02.498149  814992 cri.go:89] found id: "d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038"
	I1017 21:14:02.498152  814992 cri.go:89] found id: "458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a"
	I1017 21:14:02.498155  814992 cri.go:89] found id: "14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d"
	I1017 21:14:02.498161  814992 cri.go:89] found id: "d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	I1017 21:14:02.498165  814992 cri.go:89] found id: "e243983797b3e7a9ac1f797e5ff4593ae3fa1ce937f2f6e8b4adf65a4d116c0b"
	I1017 21:14:02.498167  814992 cri.go:89] found id: ""
	I1017 21:14:02.498215  814992 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:02.516929  814992 out.go:203] 
	W1017 21:14:02.519844  814992 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 21:14:02.520013  814992 out.go:285] * 
	* 
	W1017 21:14:02.529105  814992 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 21:14:02.533985  814992 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-521710 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-521710
helpers_test.go:243: (dbg) docker inspect old-k8s-version-521710:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	        "Created": "2025-10-17T21:11:18.645427357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809945,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:12:50.176483915Z",
	            "FinishedAt": "2025-10-17T21:12:48.054805543Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hostname",
	        "HostsPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hosts",
	        "LogPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77-json.log",
	        "Name": "/old-k8s-version-521710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-521710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-521710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	                "LowerDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-521710",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-521710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-521710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45d3ac117a131ca3715a93d2e0081592b37f2bfd4f210cf43b932d901a5583f5",
	            "SandboxKey": "/var/run/docker/netns/45d3ac117a13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-521710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:79:9d:27:df:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0dbd01eef2ecf0cfa290a0ca03fecc2259469a874644e9e5b874fbcdc1b5668f",
	                    "EndpointID": "2d14b910fb7d8e098934dd20a4ff224b536f395eadeb864ccaab6be372ea5208",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-521710",
	                        "35a78dd09101"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710: exit status 2 (560.478713ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25
E1017 21:14:04.416981  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25: (2.006271495s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:13:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
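The header above documents the klog-style prefix used on every line of this dump. As a small aid for filtering such logs, here is a self-contained Go sketch (not part of minikube; the regexp and field names are my own) that splits one line into its fields:

package main

import (
	"fmt"
	"regexp"
)

// klogLine matches the header format noted above: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	sample := "I1017 21:13:51.363957  813510 out.go:360] Setting OutFile to fd 1 ..."
	if m := klogLine.FindStringSubmatch(sample); m != nil {
		fmt.Printf("level=%s date=%s time=%s tid=%s src=%s msg=%q\n", m[1], m[2], m[3], m[4], m[5], m[6])
	}
}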
	I1017 21:13:51.363957  813510 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:13:51.364091  813510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:51.364102  813510 out.go:374] Setting ErrFile to fd 2...
	I1017 21:13:51.364107  813510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:51.364427  813510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:13:51.364801  813510 out.go:368] Setting JSON to false
	I1017 21:13:51.365837  813510 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14177,"bootTime":1760721454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:13:51.365907  813510 start.go:141] virtualization:  
	I1017 21:13:51.371066  813510 out.go:179] * [no-preload-820018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:13:51.374139  813510 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:13:51.374184  813510 notify.go:220] Checking for updates...
	I1017 21:13:51.387168  813510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:13:51.390292  813510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:51.393255  813510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:13:51.396205  813510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:13:51.399226  813510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:13:51.402923  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:51.403878  813510 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:13:51.429163  813510 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:13:51.429281  813510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:13:51.489857  813510 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:13:51.479853979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
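The `docker system info --format "{{json .}}"` call above returns one large JSON object. A minimal sketch of decoding a few of the fields the log relies on (the struct is an assumed subset, not minikube's own type):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
	Architecture    string `json:"Architecture"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	// Same invocation as in the log; only a handful of fields are kept.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", info)
}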
	I1017 21:13:51.489971  813510 docker.go:318] overlay module found
	I1017 21:13:51.495051  813510 out.go:179] * Using the docker driver based on existing profile
	I1017 21:13:51.498073  813510 start.go:305] selected driver: docker
	I1017 21:13:51.498088  813510 start.go:925] validating driver "docker" against &{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:51.498195  813510 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:13:51.498930  813510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:13:51.555038  813510 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:13:51.544188913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:13:51.555495  813510 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:13:51.555535  813510 cni.go:84] Creating CNI manager for ""
	I1017 21:13:51.555604  813510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:13:51.555648  813510 start.go:349] cluster config:
	{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:51.558892  813510 out.go:179] * Starting "no-preload-820018" primary control-plane node in "no-preload-820018" cluster
	I1017 21:13:51.561693  813510 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:13:51.564661  813510 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:13:51.567510  813510 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:13:51.567623  813510 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:13:51.567652  813510 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:13:51.567978  813510 cache.go:107] acquiring lock: {Name:mk40b757c19c3c9274f9f5d80ab21002ed44c3fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568064  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 21:13:51.568074  813510 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.786µs
	I1017 21:13:51.568087  813510 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 21:13:51.568099  813510 cache.go:107] acquiring lock: {Name:mkab9c4a8cb8e1bf28dffee17f9a3ed781aeb58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568138  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 21:13:51.568148  813510 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 51.151µs
	I1017 21:13:51.568154  813510 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 21:13:51.568164  813510 cache.go:107] acquiring lock: {Name:mkb0f531469cc497e90953411691aebfea202dba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568198  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 21:13:51.568213  813510 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.8µs
	I1017 21:13:51.568220  813510 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 21:13:51.568235  813510 cache.go:107] acquiring lock: {Name:mkc7975906f97cc89b61c851770f9e445c0bd241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568263  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 21:13:51.568272  813510 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.351µs
	I1017 21:13:51.568278  813510 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 21:13:51.568290  813510 cache.go:107] acquiring lock: {Name:mk7d4188cf80de21ea7a2f21ef7ea3cdd3e61d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568319  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 21:13:51.568328  813510 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.355µs
	I1017 21:13:51.568334  813510 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 21:13:51.568347  813510 cache.go:107] acquiring lock: {Name:mkc7f366c6bc39751a468519a3c4e03edbde6c9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568379  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1017 21:13:51.568389  813510 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.18µs
	I1017 21:13:51.568395  813510 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 21:13:51.568404  813510 cache.go:107] acquiring lock: {Name:mk31f5a4c7a30c2888716a3df14a08c66478a7b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568434  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 21:13:51.568449  813510 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 40.476µs
	I1017 21:13:51.568455  813510 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 21:13:51.568463  813510 cache.go:107] acquiring lock: {Name:mkb38536a5bb91d51d50b4384af5536a1bee04d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568494  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 21:13:51.568502  813510 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 40.074µs
	I1017 21:13:51.568508  813510 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 21:13:51.568513  813510 cache.go:87] Successfully saved all images to host disk.
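Each `cache image ... exists` line above reflects a check-before-save pattern: when the tarball is already on disk, the image is skipped and only the lookup time is reported. A rough sketch of that pattern under assumed path conventions (cachePathFor and ensureCached are illustrative names, not minikube functions):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
	"time"
)

// cachePathFor mirrors the naming visible above: the ":" tag separator becomes "_".
func cachePathFor(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// ensureCached returns quickly when the tarball already exists, as in the log above;
// actually pulling and saving the image is out of scope for this sketch.
func ensureCached(cacheDir, image string) error {
	start := time.Now()
	dst := cachePathFor(cacheDir, image)
	if _, err := os.Stat(dst); err == nil {
		fmt.Printf("cache image %q -> %q took %s (already cached)\n", image, dst, time.Since(start))
		return nil
	}
	return fmt.Errorf("%s not cached; pulling omitted in this sketch", image)
}

func main() {
	home, _ := os.UserHomeDir()
	_ = ensureCached(filepath.Join(home, ".minikube/cache/images/arm64"), "registry.k8s.io/pause:3.10.1")
}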
	I1017 21:13:51.587421  813510 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:13:51.587447  813510 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:13:51.587461  813510 cache.go:232] Successfully downloaded all kic artifacts
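The `exists in daemon, skipping load` message above implies a presence check against the local Docker daemon. One simple way to express such a check uses the exit status of `docker image inspect` (inDaemon is an illustrative helper; the digest suffix from the log is omitted here):

package main

import (
	"fmt"
	"os/exec"
)

// inDaemon reports whether the local Docker daemon already has the image.
func inDaemon(image string) bool {
	return exec.Command("docker", "image", "inspect", image).Run() == nil
}

func main() {
	img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757"
	fmt.Println(img, "present in local daemon:", inDaemon(img))
}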
	I1017 21:13:51.587492  813510 start.go:360] acquireMachinesLock for no-preload-820018: {Name:mk60df73c299cbe0a2eb1abd2d4c927199ea7cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.587548  813510 start.go:364] duration metric: took 35.143µs to acquireMachinesLock for "no-preload-820018"
	I1017 21:13:51.587572  813510 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:13:51.587583  813510 fix.go:54] fixHost starting: 
	I1017 21:13:51.587853  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:51.605748  813510 fix.go:112] recreateIfNeeded on no-preload-820018: state=Stopped err=<nil>
	W1017 21:13:51.605781  813510 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 21:13:51.610820  813510 out.go:252] * Restarting existing docker container for "no-preload-820018" ...
	I1017 21:13:51.610916  813510 cli_runner.go:164] Run: docker start no-preload-820018
	I1017 21:13:51.897548  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:51.918477  813510 kic.go:430] container "no-preload-820018" state is running.
	I1017 21:13:51.918875  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:51.937737  813510 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:13:51.938172  813510 machine.go:93] provisionDockerMachine start ...
	I1017 21:13:51.938251  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:51.960666  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:51.960993  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:51.961004  813510 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:13:51.961723  813510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42128->127.0.0.1:33839: read: connection reset by peer
	I1017 21:13:55.111180  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
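The `connection reset by peer` above, followed a few seconds later by a successful `hostname` run, suggests the dial is retried while sshd comes up inside the restarted container. A generic TCP retry sketch under that assumption (the real code drives an SSH client, not bare net.Dial):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying until the forwarded sshd port accepts connections.
func dialWithRetry(addr string, attempts int, wait time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		c, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			return c, nil
		}
		lastErr = err
		time.Sleep(wait)
	}
	return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
}

func main() {
	if c, err := dialWithRetry("127.0.0.1:33839", 10, time.Second); err == nil {
		defer c.Close()
		fmt.Println("connected to", c.RemoteAddr())
	}
}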
	
	I1017 21:13:55.111210  813510 ubuntu.go:182] provisioning hostname "no-preload-820018"
	I1017 21:13:55.111284  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.131014  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.131353  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.131366  813510 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820018 && echo "no-preload-820018" | sudo tee /etc/hostname
	I1017 21:13:55.293242  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
	
	I1017 21:13:55.293400  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.311921  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.312235  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.312269  813510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820018/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:13:55.463703  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:13:55.463728  813510 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:13:55.463748  813510 ubuntu.go:190] setting up certificates
	I1017 21:13:55.463757  813510 provision.go:84] configureAuth start
	I1017 21:13:55.463818  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:55.482409  813510 provision.go:143] copyHostCerts
	I1017 21:13:55.482478  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:13:55.482497  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:13:55.482575  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:13:55.482680  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:13:55.482686  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:13:55.482728  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:13:55.482878  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:13:55.482888  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:13:55.482921  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:13:55.482995  813510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.no-preload-820018 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820018]
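The server certificate above is generated with both IP and DNS SANs. A sketch of how such a template can be described with crypto/x509 (signing with the CA key is omitted, and the Subject placement is illustrative):

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Certificate template carrying the SANs listed in the log line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-820018"}},
		DNSNames:     []string{"localhost", "minikube", "no-preload-820018"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	fmt.Printf("server cert template: %d DNS SANs, %d IP SANs\n", len(tmpl.DNSNames), len(tmpl.IPAddresses))
}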
	I1017 21:13:55.590300  813510 provision.go:177] copyRemoteCerts
	I1017 21:13:55.590376  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:13:55.590415  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.608944  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:55.715205  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:13:55.736699  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:13:55.755764  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:13:55.774609  813510 provision.go:87] duration metric: took 310.826375ms to configureAuth
	I1017 21:13:55.774634  813510 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:13:55.774828  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:55.774929  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.792104  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.792425  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.792447  813510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:13:56.163764  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:13:56.163791  813510 machine.go:96] duration metric: took 4.225604182s to provisionDockerMachine
	I1017 21:13:56.163803  813510 start.go:293] postStartSetup for "no-preload-820018" (driver="docker")
	I1017 21:13:56.163814  813510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:13:56.163899  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:13:56.163943  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.185006  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.290902  813510 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:13:56.294274  813510 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:13:56.294349  813510 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:13:56.294376  813510 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:13:56.294454  813510 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:13:56.294557  813510 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:13:56.294676  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:13:56.302442  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:13:56.320080  813510 start.go:296] duration metric: took 156.261076ms for postStartSetup
	I1017 21:13:56.320168  813510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:13:56.320212  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.338107  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.440259  813510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:13:56.444856  813510 fix.go:56] duration metric: took 4.85726557s for fixHost
	I1017 21:13:56.444881  813510 start.go:83] releasing machines lock for "no-preload-820018", held for 4.85731975s
	I1017 21:13:56.444947  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:56.462431  813510 ssh_runner.go:195] Run: cat /version.json
	I1017 21:13:56.462486  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.462807  813510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:13:56.462855  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.482586  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.488145  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.691485  813510 ssh_runner.go:195] Run: systemctl --version
	I1017 21:13:56.697780  813510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:13:56.735274  813510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:13:56.740047  813510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:13:56.740126  813510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:13:56.749023  813510 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:13:56.749048  813510 start.go:495] detecting cgroup driver to use...
	I1017 21:13:56.749079  813510 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:13:56.749133  813510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:13:56.765294  813510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:13:56.778030  813510 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:13:56.778104  813510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:13:56.794129  813510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:13:56.807232  813510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:13:56.936253  813510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:13:57.057756  813510 docker.go:234] disabling docker service ...
	I1017 21:13:57.057862  813510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:13:57.075444  813510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:13:57.089334  813510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:13:57.200493  813510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:13:57.327349  813510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:13:57.343364  813510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:13:57.360795  813510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:13:57.360926  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.371201  813510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:13:57.371322  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.381363  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.390883  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.400108  813510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:13:57.408336  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.418081  813510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.426522  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.436168  813510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:13:57.444824  813510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:13:57.452799  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:57.576878  813510 ssh_runner.go:195] Run: sudo systemctl restart crio
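The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted. The same two substitutions (pause image and cgroup manager) expressed in Go, against illustrative starting content:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Illustrative starting content; the real file is /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.runtime]
cgroup_manager = "systemd"
[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}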
	I1017 21:13:57.723442  813510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:13:57.723564  813510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:13:57.727531  813510 start.go:563] Will wait 60s for crictl version
	I1017 21:13:57.727616  813510 ssh_runner.go:195] Run: which crictl
	I1017 21:13:57.731053  813510 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:13:57.755558  813510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:13:57.755655  813510 ssh_runner.go:195] Run: crio --version
	I1017 21:13:57.784410  813510 ssh_runner.go:195] Run: crio --version
	I1017 21:13:57.823687  813510 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:13:57.826564  813510 cli_runner.go:164] Run: docker network inspect no-preload-820018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:13:57.843057  813510 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:13:57.846713  813510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:13:57.855852  813510 kubeadm.go:883] updating cluster {Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:13:57.855961  813510 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:13:57.856011  813510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:13:57.892542  813510 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:13:57.892567  813510 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:13:57.892576  813510 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 21:13:57.892666  813510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-820018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:13:57.892744  813510 ssh_runner.go:195] Run: crio config
	I1017 21:13:57.967521  813510 cni.go:84] Creating CNI manager for ""
	I1017 21:13:57.967560  813510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:13:57.967584  813510 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:13:57.967609  813510 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820018 NodeName:no-preload-820018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:13:57.967764  813510 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
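The kubeadm, kubelet, and kube-proxy configuration above is generated from the cluster config logged earlier. A cut-down sketch of producing the InitConfiguration stanza with text/template (the template body and field names here are assumptions for illustration, not minikube's actual bootstrapper template):

package main

import (
	"os"
	"text/template"
)

// initCfg is an assumed, trimmed template for the InitConfiguration stanza shown above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("initcfg").Parse(initCfg))
	_ = t.Execute(os.Stdout, map[string]interface{}{
		"NodeIP":        "192.168.85.2",
		"APIServerPort": 8443,
		"NodeName":      "no-preload-820018",
	})
}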
	
	I1017 21:13:57.967835  813510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:13:57.977827  813510 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:13:57.977959  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:13:57.985799  813510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:13:57.999048  813510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:13:58.014491  813510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 21:13:58.029381  813510 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:13:58.033398  813510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
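The bash one-liner above drops any stale control-plane.minikube.internal entry from /etc/hosts and appends the current mapping. The same upsert logic as a small Go function (upsertHostsEntry is an illustrative name; entries are tab-separated, as in the log):

package main

import (
	"fmt"
	"strings"
)

// upsertHostsEntry removes any previous line for name and appends the current mapping,
// mirroring the grep -v / echo pipeline above.
func upsertHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.85.1\thost.minikube.internal\n"
	fmt.Print(upsertHostsEntry(hosts, "192.168.85.2", "control-plane.minikube.internal"))
}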
	I1017 21:13:58.045622  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:58.178202  813510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:58.201117  813510 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018 for IP: 192.168.85.2
	I1017 21:13:58.201190  813510 certs.go:195] generating shared ca certs ...
	I1017 21:13:58.201220  813510 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:58.201406  813510 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:13:58.201476  813510 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:13:58.201498  813510 certs.go:257] generating profile certs ...
	I1017 21:13:58.201629  813510 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.key
	I1017 21:13:58.201738  813510 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.key.f89ee78e
	I1017 21:13:58.201802  813510 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.key
	I1017 21:13:58.201947  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:13:58.202003  813510 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:13:58.202032  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:13:58.202087  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:13:58.202140  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:13:58.202204  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:13:58.202274  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:13:58.202915  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:13:58.229014  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:13:58.248709  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:13:58.280192  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:13:58.299041  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:13:58.318709  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:13:58.342931  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:13:58.364628  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:13:58.389540  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:13:58.422003  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:13:58.441170  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:13:58.461730  813510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:13:58.476642  813510 ssh_runner.go:195] Run: openssl version
	I1017 21:13:58.484848  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:13:58.494561  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.498500  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.498576  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.542231  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:13:58.551505  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:13:58.559590  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.563405  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.563485  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.604708  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:13:58.612751  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:13:58.621207  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.625316  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.625382  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.667530  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:13:58.676105  813510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:13:58.680687  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:13:58.723293  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:13:58.764888  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:13:58.806552  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:13:58.854288  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:13:58.900393  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 21:13:58.955635  813510 kubeadm.go:400] StartCluster: {Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:58.955748  813510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:13:58.955811  813510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:13:58.993468  813510 cri.go:89] found id: ""
	I1017 21:13:58.993577  813510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:13:59.003232  813510 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:13:59.003269  813510 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:13:59.003327  813510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:13:59.016140  813510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:13:59.016782  813510 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-820018" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:59.017066  813510 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-820018" cluster setting kubeconfig missing "no-preload-820018" context setting]
	I1017 21:13:59.017593  813510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.019492  813510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:13:59.028929  813510 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 21:13:59.028973  813510 kubeadm.go:601] duration metric: took 25.687361ms to restartPrimaryControlPlane
	I1017 21:13:59.028983  813510 kubeadm.go:402] duration metric: took 73.358813ms to StartCluster
	I1017 21:13:59.028998  813510 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.029070  813510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:59.030014  813510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.031011  813510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:13:59.031403  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:59.031442  813510 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:13:59.031586  813510 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820018"
	I1017 21:13:59.031601  813510 addons.go:238] Setting addon storage-provisioner=true in "no-preload-820018"
	W1017 21:13:59.031612  813510 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:13:59.031635  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.032094  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.032255  813510 addons.go:69] Setting dashboard=true in profile "no-preload-820018"
	I1017 21:13:59.032276  813510 addons.go:238] Setting addon dashboard=true in "no-preload-820018"
	W1017 21:13:59.032285  813510 addons.go:247] addon dashboard should already be in state true
	I1017 21:13:59.032320  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.032797  813510 addons.go:69] Setting default-storageclass=true in profile "no-preload-820018"
	I1017 21:13:59.032821  813510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820018"
	I1017 21:13:59.033057  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.033308  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.039514  813510 out.go:179] * Verifying Kubernetes components...
	I1017 21:13:59.043263  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:59.072292  813510 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:13:59.075854  813510 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 21:13:59.078819  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:13:59.078844  813510 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:13:59.078920  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.104110  813510 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:13:59.109450  813510 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:59.109474  813510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:13:59.109543  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.111341  813510 addons.go:238] Setting addon default-storageclass=true in "no-preload-820018"
	W1017 21:13:59.111373  813510 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:13:59.111402  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.111874  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.133171  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.161191  813510 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:59.161224  813510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:13:59.161288  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.166514  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.190169  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.462309  813510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:59.550791  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:13:59.550819  813510 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:13:59.573425  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:59.597018  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:59.624519  813510 node_ready.go:35] waiting up to 6m0s for node "no-preload-820018" to be "Ready" ...
	I1017 21:13:59.683982  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:13:59.684004  813510 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:13:59.886154  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:13:59.886178  813510 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:13:59.989920  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:13:59.989939  813510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:14:00.055073  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:14:00.055098  813510 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:14:00.121371  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:14:00.121397  813510 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:14:00.160694  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:14:00.160719  813510 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:14:00.194580  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:14:00.194605  813510 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:14:00.234016  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:14:00.234043  813510 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:14:00.274032  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.553027792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.55981068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.560512548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.576716784Z" level=info msg="Created container d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper" id=dcc4616d-cd3c-4ca3-bc23-01a42e3945c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.577616726Z" level=info msg="Starting container: d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1" id=bef4ca23-aabf-4a3b-b95a-af1f70a25569 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.582729757Z" level=info msg="Started container" PID=1643 containerID=d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper id=bef4ca23-aabf-4a3b-b95a-af1f70a25569 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb
	Oct 17 21:13:43 old-k8s-version-521710 conmon[1641]: conmon d64d85044b4f3103cc0e <ninfo>: container 1643 exited with status 1
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.773959528Z" level=info msg="Removing container: a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.781597738Z" level=info msg="Error loading conmon cgroup of container a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f: cgroup deleted" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.784682015Z" level=info msg="Removed container a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.571634859Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575884616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575922049Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575947551Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.579373099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.57941447Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.579437543Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582622613Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582659906Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582683192Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.585871314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.585904192Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.58593273Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.589042238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.589075674Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d64d85044b4f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   e9236bec97b9c       dashboard-metrics-scraper-5f989dc9cf-56mrn       kubernetes-dashboard
	2695b5995422f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           25 seconds ago       Running             storage-provisioner         2                   ee9980cc74477       storage-provisioner                              kube-system
	e243983797b3e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   32 seconds ago       Running             kubernetes-dashboard        0                   f138f5c708842       kubernetes-dashboard-8694d4445c-66tmt            kubernetes-dashboard
	a7d328b0e4470       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   70bb204381717       busybox                                          default
	39d79d60ebfd0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           55 seconds ago       Running             coredns                     1                   7a7ba431a2d03       coredns-5dd5756b68-vbl7d                         kube-system
	ab89645fc3810       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   d073edb99920b       kindnet-w5t9r                                    kube-system
	ae9791672cbc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           56 seconds ago       Exited              storage-provisioner         1                   ee9980cc74477       storage-provisioner                              kube-system
	b8d370901c995       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           56 seconds ago       Running             kube-proxy                  1                   f834f5ccadc91       kube-proxy-dz7dm                                 kube-system
	33c3db2cf92d2       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   3456de5b89a0b       kube-controller-manager-old-k8s-version-521710   kube-system
	d8c0022c99e83       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   840359d34a5cf       etcd-old-k8s-version-521710                      kube-system
	458627afcf154       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8338d872c892e       kube-scheduler-old-k8s-version-521710            kube-system
	14842547ca451       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   8894f87f1d989       kube-apiserver-old-k8s-version-521710            kube-system
	
	
	==> coredns [39d79d60ebfd0b8d8522cbd3f39d40d526cbb7af715741008c8de6a9b84e0697] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51095 - 44765 "HINFO IN 3452905297381785184.8556009661708522168. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012702953s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-521710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-521710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-521710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_11_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-521710
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:13:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-521710
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f23ef8d1-8109-4c2e-9a15-daa99b3bc5b9
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 coredns-5dd5756b68-vbl7d                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m2s
	  kube-system                 etcd-old-k8s-version-521710                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m16s
	  kube-system                 kindnet-w5t9r                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-old-k8s-version-521710             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-controller-manager-old-k8s-version-521710    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 kube-proxy-dz7dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-old-k8s-version-521710             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-56mrn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-66tmt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m1s                   kube-proxy       
	  Normal  Starting                 54s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m26s (x8 over 2m26s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m16s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m16s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m16s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m16s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m3s                   node-controller  Node old-k8s-version-521710 event: Registered Node old-k8s-version-521710 in Controller
	  Normal  NodeReady                106s                   kubelet          Node old-k8s-version-521710 status is now: NodeReady
	  Normal  Starting                 65s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-521710 event: Registered Node old-k8s-version-521710 in Controller
	
	
	==> dmesg <==
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038] <==
	{"level":"info","ts":"2025-10-17T21:13:00.947399Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T21:13:00.947551Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-17T21:13:00.947865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-17T21:13:00.94793Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-17T21:13:00.948021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:13:00.948048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:13:01.024043Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T21:13:01.024246Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T21:13:01.024273Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T21:13:01.02435Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:13:01.024357Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:13:02.165655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.165962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.165998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.166045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.170852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:13:02.171966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T21:13:02.17952Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:13:02.180485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T21:13:02.180841Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-521710 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T21:13:02.181011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T21:13:02.211524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:14:04 up  3:56,  0 user,  load average: 3.51, 3.74, 3.17
	Linux old-k8s-version-521710 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab89645fc3810009284b0ce9f74a350c1b848246e6f675169b08e6d8a64246d3] <==
	I1017 21:13:08.349566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:13:08.349928       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:13:08.350109       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:13:08.350122       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:13:08.350146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:13:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:13:08.571377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:13:08.571397       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:13:08.571406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:13:08.572084       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:13:38.572039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:13:38.572149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:13:38.572057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:13:38.572185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 21:13:40.172166       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:13:40.172204       1 metrics.go:72] Registering metrics
	I1017 21:13:40.172265       1 controller.go:711] "Syncing nftables rules"
	I1017 21:13:48.571324       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:13:48.571360       1 main.go:301] handling current node
	I1017 21:13:58.577425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:13:58.577532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d] <==
	I1017 21:13:06.719502       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 21:13:06.719527       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 21:13:06.719655       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 21:13:06.719771       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 21:13:06.721357       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:13:06.756731       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 21:13:06.759241       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 21:13:06.768885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:13:06.785589       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 21:13:06.789532       1 aggregator.go:166] initial CRD sync complete...
	I1017 21:13:06.789884       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 21:13:06.789929       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 21:13:06.789965       1 cache.go:39] Caches are synced for autoregister controller
	E1017 21:13:06.873406       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:13:07.373991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:13:09.866146       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 21:13:09.952710       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 21:13:10.017213       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:13:10.044200       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:13:10.060033       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 21:13:10.166738       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.86.35"}
	I1017 21:13:10.228085       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.122.33"}
	I1017 21:13:19.756643       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 21:13:19.787179       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 21:13:19.891337       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45] <==
	I1017 21:13:19.851144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.307665ms"
	I1017 21:13:19.861917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.627962ms"
	I1017 21:13:19.873334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.131883ms"
	I1017 21:13:19.873554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.933µs"
	I1017 21:13:19.876131       1 shared_informer.go:318] Caches are synced for disruption
	I1017 21:13:19.879681       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1017 21:13:19.880159       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 21:13:19.880391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="151.773µs"
	I1017 21:13:19.887943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.702787ms"
	I1017 21:13:19.888068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.586µs"
	I1017 21:13:19.901835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.159µs"
	I1017 21:13:19.924712       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 21:13:19.955907       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 21:13:20.330495       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:13:20.330529       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 21:13:20.334722       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:13:25.721232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.153µs"
	I1017 21:13:26.739827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="510.653µs"
	I1017 21:13:27.751594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.41µs"
	I1017 21:13:31.765761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.34065ms"
	I1017 21:13:31.766239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.464µs"
	I1017 21:13:43.796360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.155µs"
	I1017 21:13:46.014010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.497672ms"
	I1017 21:13:46.014195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.675µs"
	I1017 21:13:51.060124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.119µs"
	
	
	==> kube-proxy [b8d370901c995c38b064dd226c47b831e35b4405a3fe066e8cf76e1661949864] <==
	I1017 21:13:08.419310       1 server_others.go:69] "Using iptables proxy"
	I1017 21:13:08.590878       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 21:13:09.423999       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:13:09.540625       1 server_others.go:152] "Using iptables Proxier"
	I1017 21:13:09.582614       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 21:13:09.582635       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 21:13:09.582678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 21:13:09.582951       1 server.go:846] "Version info" version="v1.28.0"
	I1017 21:13:09.582961       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:13:09.597942       1 config.go:188] "Starting service config controller"
	I1017 21:13:09.597974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 21:13:09.597994       1 config.go:97] "Starting endpoint slice config controller"
	I1017 21:13:09.597997       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 21:13:09.600366       1 config.go:315] "Starting node config controller"
	I1017 21:13:09.600383       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 21:13:09.701358       1 shared_informer.go:318] Caches are synced for service config
	I1017 21:13:09.701431       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 21:13:09.701682       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a] <==
	I1017 21:13:03.186526       1 serving.go:348] Generated self-signed cert in-memory
	W1017 21:13:06.615615       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 21:13:06.615722       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 21:13:06.615756       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 21:13:06.615800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 21:13:06.752423       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 21:13:06.752522       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:13:06.756843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:13:06.756884       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 21:13:06.757345       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 21:13:06.757436       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 21:13:06.858045       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887164     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkjl\" (UniqueName: \"kubernetes.io/projected/155b15b2-18b2-41fd-b2a9-3cff308a5a6d-kube-api-access-ghkjl\") pod \"kubernetes-dashboard-8694d4445c-66tmt\" (UID: \"155b15b2-18b2-41fd-b2a9-3cff308a5a6d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-66tmt"
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887280     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/075072c9-9605-4d9d-9a9d-ab5cfbfb5b51-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-56mrn\" (UID: \"075072c9-9605-4d9d-9a9d-ab5cfbfb5b51\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn"
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887376     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqbj8\" (UniqueName: \"kubernetes.io/projected/075072c9-9605-4d9d-9a9d-ab5cfbfb5b51-kube-api-access-kqbj8\") pod \"dashboard-metrics-scraper-5f989dc9cf-56mrn\" (UID: \"075072c9-9605-4d9d-9a9d-ab5cfbfb5b51\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn"
	Oct 17 21:13:21 old-k8s-version-521710 kubelet[779]: W1017 21:13:21.069261     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/crio-e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb WatchSource:0}: Error finding container e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb: Status 404 returned error can't find the container with id e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb
	Oct 17 21:13:21 old-k8s-version-521710 kubelet[779]: W1017 21:13:21.082743     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/crio-f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115 WatchSource:0}: Error finding container f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115: Status 404 returned error can't find the container with id f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115
	Oct 17 21:13:25 old-k8s-version-521710 kubelet[779]: I1017 21:13:25.707290     779 scope.go:117] "RemoveContainer" containerID="bcdfd482cf2d73e50bd62bb463bb7c8eda8fe2cfd2b4fd900b860ccdae8149ad"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: I1017 21:13:26.717065     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: E1017 21:13:26.718252     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: I1017 21:13:26.720255     779 scope.go:117] "RemoveContainer" containerID="bcdfd482cf2d73e50bd62bb463bb7c8eda8fe2cfd2b4fd900b860ccdae8149ad"
	Oct 17 21:13:27 old-k8s-version-521710 kubelet[779]: I1017 21:13:27.728049     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:27 old-k8s-version-521710 kubelet[779]: E1017 21:13:27.737169     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:31 old-k8s-version-521710 kubelet[779]: I1017 21:13:31.037622     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:31 old-k8s-version-521710 kubelet[779]: E1017 21:13:31.038136     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:38 old-k8s-version-521710 kubelet[779]: I1017 21:13:38.756566     779 scope.go:117] "RemoveContainer" containerID="ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1"
	Oct 17 21:13:38 old-k8s-version-521710 kubelet[779]: I1017 21:13:38.776632     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-66tmt" podStartSLOduration=9.432475378 podCreationTimestamp="2025-10-17 21:13:19 +0000 UTC" firstStartedPulling="2025-10-17 21:13:21.087247572 +0000 UTC m=+21.910778516" lastFinishedPulling="2025-10-17 21:13:31.428674978 +0000 UTC m=+32.252205930" observedRunningTime="2025-10-17 21:13:31.752742134 +0000 UTC m=+32.576273086" watchObservedRunningTime="2025-10-17 21:13:38.773902792 +0000 UTC m=+39.597433736"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.549284     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.771522     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.771804     779 scope.go:117] "RemoveContainer" containerID="d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: E1017 21:13:43.772105     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:51 old-k8s-version-521710 kubelet[779]: I1017 21:13:51.037893     779 scope.go:117] "RemoveContainer" containerID="d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	Oct 17 21:13:51 old-k8s-version-521710 kubelet[779]: E1017 21:13:51.038212     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:14:00 old-k8s-version-521710 kubelet[779]: I1017 21:14:00.628046     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e243983797b3e7a9ac1f797e5ff4593ae3fa1ce937f2f6e8b4adf65a4d116c0b] <==
	2025/10/17 21:13:31 Using namespace: kubernetes-dashboard
	2025/10/17 21:13:31 Using in-cluster config to connect to apiserver
	2025/10/17 21:13:31 Using secret token for csrf signing
	2025/10/17 21:13:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:13:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:13:31 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 21:13:31 Generating JWE encryption key
	2025/10/17 21:13:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:13:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:13:31 Initializing JWE encryption key from synchronized object
	2025/10/17 21:13:31 Creating in-cluster Sidecar client
	2025/10/17 21:13:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:13:31 Serving insecurely on HTTP port: 9090
	2025/10/17 21:14:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:13:31 Starting overwatch
	
	
	==> storage-provisioner [2695b5995422f52d2280a5c04a29a312332fd9c9207c0d9ceb8a4a2415d6f942] <==
	I1017 21:13:38.821165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:13:38.838754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:13:38.838805       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 21:13:56.242117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:13:56.242619       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2adb1101-5e3e-4bb4-b42e-5187960e23fd", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0 became leader
	I1017 21:13:56.242792       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0!
	I1017 21:13:56.343098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0!
	
	
	==> storage-provisioner [ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1] <==
	I1017 21:13:08.235839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:13:38.237861       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-521710 -n old-k8s-version-521710: exit status 2 (589.15883ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-521710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-521710
helpers_test.go:243: (dbg) docker inspect old-k8s-version-521710:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	        "Created": "2025-10-17T21:11:18.645427357Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 809945,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:12:50.176483915Z",
	            "FinishedAt": "2025-10-17T21:12:48.054805543Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hostname",
	        "HostsPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/hosts",
	        "LogPath": "/var/lib/docker/containers/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77-json.log",
	        "Name": "/old-k8s-version-521710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-521710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-521710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77",
	                "LowerDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2da747f9f16d29261912175109e75e8257114eb57298badf5e6945057561d990/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-521710",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-521710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-521710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-521710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "45d3ac117a131ca3715a93d2e0081592b37f2bfd4f210cf43b932d901a5583f5",
	            "SandboxKey": "/var/run/docker/netns/45d3ac117a13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33834"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33835"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33838"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33836"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33837"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-521710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:79:9d:27:df:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0dbd01eef2ecf0cfa290a0ca03fecc2259469a874644e9e5b874fbcdc1b5668f",
	                    "EndpointID": "2d14b910fb7d8e098934dd20a4ff224b536f395eadeb864ccaab6be372ea5208",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-521710",
	                        "35a78dd09101"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
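The port mappings captured in the inspect output above can also be read back with a Go template instead of the full JSON dump. The one-liner below is a minimal sketch that reuses the same format string minikube itself runs later in this log during provisioning of no-preload-820018, here pointed at the old-k8s-version-521710 container from the inspect output:

    # prints the host port bound to the container's SSH port (33834 in the inspect output above)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-521710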
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710: exit status 2 (508.230944ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
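Context for the "(may be ok)" note: the Host field still reports Running because the docker container is up, but this is the Pause test and systemd stopped the kubelet at 21:14:00 (see the kubelet log section above), so minikube status exits non-zero even though the individual fields queried here read Running; exit status 2 most likely reflects that degraded component rather than a failure of the status command itself. A quick way to see the component fields together, assuming the same Go-template mechanism the helper uses plus the Kubelet and Kubeconfig fields from minikube's default status output:

    # one line per component; a paused/stopped node typically shows kubelet as Stopped while host stays Running
    out/minikube-linux-arm64 status -p old-k8s-version-521710 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'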
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-521710 logs -n 25: (1.989387127s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:13:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:13:51.363957  813510 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:13:51.364091  813510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:51.364102  813510 out.go:374] Setting ErrFile to fd 2...
	I1017 21:13:51.364107  813510 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:13:51.364427  813510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:13:51.364801  813510 out.go:368] Setting JSON to false
	I1017 21:13:51.365837  813510 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14177,"bootTime":1760721454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:13:51.365907  813510 start.go:141] virtualization:  
	I1017 21:13:51.371066  813510 out.go:179] * [no-preload-820018] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:13:51.374139  813510 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:13:51.374184  813510 notify.go:220] Checking for updates...
	I1017 21:13:51.387168  813510 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:13:51.390292  813510 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:51.393255  813510 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:13:51.396205  813510 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:13:51.399226  813510 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:13:51.402923  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:51.403878  813510 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:13:51.429163  813510 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:13:51.429281  813510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:13:51.489857  813510 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:13:51.479853979 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:13:51.489971  813510 docker.go:318] overlay module found
	I1017 21:13:51.495051  813510 out.go:179] * Using the docker driver based on existing profile
	I1017 21:13:51.498073  813510 start.go:305] selected driver: docker
	I1017 21:13:51.498088  813510 start.go:925] validating driver "docker" against &{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9
p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:51.498195  813510 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:13:51.498930  813510 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:13:51.555038  813510 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:13:51.544188913 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:13:51.555495  813510 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:13:51.555535  813510 cni.go:84] Creating CNI manager for ""
	I1017 21:13:51.555604  813510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:13:51.555648  813510 start.go:349] cluster config:
	{Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:51.558892  813510 out.go:179] * Starting "no-preload-820018" primary control-plane node in "no-preload-820018" cluster
	I1017 21:13:51.561693  813510 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:13:51.564661  813510 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:13:51.567510  813510 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:13:51.567623  813510 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:13:51.567652  813510 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:13:51.567978  813510 cache.go:107] acquiring lock: {Name:mk40b757c19c3c9274f9f5d80ab21002ed44c3fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568064  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1017 21:13:51.568074  813510 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 109.786µs
	I1017 21:13:51.568087  813510 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1017 21:13:51.568099  813510 cache.go:107] acquiring lock: {Name:mkab9c4a8cb8e1bf28dffee17f9a3ed781aeb58f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568138  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1017 21:13:51.568148  813510 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 51.151µs
	I1017 21:13:51.568154  813510 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1017 21:13:51.568164  813510 cache.go:107] acquiring lock: {Name:mkb0f531469cc497e90953411691aebfea202dba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568198  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1017 21:13:51.568213  813510 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 43.8µs
	I1017 21:13:51.568220  813510 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1017 21:13:51.568235  813510 cache.go:107] acquiring lock: {Name:mkc7975906f97cc89b61c851770f9e445c0bd241 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568263  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1017 21:13:51.568272  813510 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 38.351µs
	I1017 21:13:51.568278  813510 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1017 21:13:51.568290  813510 cache.go:107] acquiring lock: {Name:mk7d4188cf80de21ea7a2f21ef7ea3cdd3e61d17 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568319  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1017 21:13:51.568328  813510 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 42.355µs
	I1017 21:13:51.568334  813510 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1017 21:13:51.568347  813510 cache.go:107] acquiring lock: {Name:mkc7f366c6bc39751a468519a3c4e03edbde6c9a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568379  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1017 21:13:51.568389  813510 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 47.18µs
	I1017 21:13:51.568395  813510 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1017 21:13:51.568404  813510 cache.go:107] acquiring lock: {Name:mk31f5a4c7a30c2888716a3df14a08c66478a7b1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568434  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1017 21:13:51.568449  813510 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 40.476µs
	I1017 21:13:51.568455  813510 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1017 21:13:51.568463  813510 cache.go:107] acquiring lock: {Name:mkb38536a5bb91d51d50b4384af5536a1bee04d1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.568494  813510 cache.go:115] /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1017 21:13:51.568502  813510 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 40.074µs
	I1017 21:13:51.568508  813510 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1017 21:13:51.568513  813510 cache.go:87] Successfully saved all images to host disk.
	I1017 21:13:51.587421  813510 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:13:51.587447  813510 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:13:51.587461  813510 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:13:51.587492  813510 start.go:360] acquireMachinesLock for no-preload-820018: {Name:mk60df73c299cbe0a2eb1abd2d4c927199ea7cdf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:13:51.587548  813510 start.go:364] duration metric: took 35.143µs to acquireMachinesLock for "no-preload-820018"
	I1017 21:13:51.587572  813510 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:13:51.587583  813510 fix.go:54] fixHost starting: 
	I1017 21:13:51.587853  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:51.605748  813510 fix.go:112] recreateIfNeeded on no-preload-820018: state=Stopped err=<nil>
	W1017 21:13:51.605781  813510 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 21:13:51.610820  813510 out.go:252] * Restarting existing docker container for "no-preload-820018" ...
	I1017 21:13:51.610916  813510 cli_runner.go:164] Run: docker start no-preload-820018
	I1017 21:13:51.897548  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:51.918477  813510 kic.go:430] container "no-preload-820018" state is running.
	I1017 21:13:51.918875  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:51.937737  813510 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/config.json ...
	I1017 21:13:51.938172  813510 machine.go:93] provisionDockerMachine start ...
	I1017 21:13:51.938251  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:51.960666  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:51.960993  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:51.961004  813510 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:13:51.961723  813510 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42128->127.0.0.1:33839: read: connection reset by peer
	I1017 21:13:55.111180  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
	
	I1017 21:13:55.111210  813510 ubuntu.go:182] provisioning hostname "no-preload-820018"
	I1017 21:13:55.111284  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.131014  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.131353  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.131366  813510 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-820018 && echo "no-preload-820018" | sudo tee /etc/hostname
	I1017 21:13:55.293242  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-820018
	
	I1017 21:13:55.293400  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.311921  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.312235  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.312269  813510 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-820018' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-820018/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-820018' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:13:55.463703  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:13:55.463728  813510 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:13:55.463748  813510 ubuntu.go:190] setting up certificates
	I1017 21:13:55.463757  813510 provision.go:84] configureAuth start
	I1017 21:13:55.463818  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:55.482409  813510 provision.go:143] copyHostCerts
	I1017 21:13:55.482478  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:13:55.482497  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:13:55.482575  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:13:55.482680  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:13:55.482686  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:13:55.482728  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:13:55.482878  813510 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:13:55.482888  813510 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:13:55.482921  813510 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:13:55.482995  813510 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.no-preload-820018 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-820018]
	I1017 21:13:55.590300  813510 provision.go:177] copyRemoteCerts
	I1017 21:13:55.590376  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:13:55.590415  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.608944  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:55.715205  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:13:55.736699  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:13:55.755764  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:13:55.774609  813510 provision.go:87] duration metric: took 310.826375ms to configureAuth
	I1017 21:13:55.774634  813510 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:13:55.774828  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:55.774929  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:55.792104  813510 main.go:141] libmachine: Using SSH client type: native
	I1017 21:13:55.792425  813510 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33839 <nil> <nil>}
	I1017 21:13:55.792447  813510 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:13:56.163764  813510 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:13:56.163791  813510 machine.go:96] duration metric: took 4.225604182s to provisionDockerMachine
	I1017 21:13:56.163803  813510 start.go:293] postStartSetup for "no-preload-820018" (driver="docker")
	I1017 21:13:56.163814  813510 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:13:56.163899  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:13:56.163943  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.185006  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.290902  813510 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:13:56.294274  813510 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:13:56.294349  813510 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:13:56.294376  813510 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:13:56.294454  813510 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:13:56.294557  813510 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:13:56.294676  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:13:56.302442  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:13:56.320080  813510 start.go:296] duration metric: took 156.261076ms for postStartSetup
	I1017 21:13:56.320168  813510 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:13:56.320212  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.338107  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.440259  813510 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:13:56.444856  813510 fix.go:56] duration metric: took 4.85726557s for fixHost
	I1017 21:13:56.444881  813510 start.go:83] releasing machines lock for "no-preload-820018", held for 4.85731975s
	I1017 21:13:56.444947  813510 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-820018
	I1017 21:13:56.462431  813510 ssh_runner.go:195] Run: cat /version.json
	I1017 21:13:56.462486  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.462807  813510 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:13:56.462855  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:56.482586  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.488145  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:56.691485  813510 ssh_runner.go:195] Run: systemctl --version
	I1017 21:13:56.697780  813510 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:13:56.735274  813510 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:13:56.740047  813510 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:13:56.740126  813510 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:13:56.749023  813510 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:13:56.749048  813510 start.go:495] detecting cgroup driver to use...
	I1017 21:13:56.749079  813510 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:13:56.749133  813510 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:13:56.765294  813510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:13:56.778030  813510 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:13:56.778104  813510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:13:56.794129  813510 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:13:56.807232  813510 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:13:56.936253  813510 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:13:57.057756  813510 docker.go:234] disabling docker service ...
	I1017 21:13:57.057862  813510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:13:57.075444  813510 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:13:57.089334  813510 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:13:57.200493  813510 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:13:57.327349  813510 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:13:57.343364  813510 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:13:57.360795  813510 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:13:57.360926  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.371201  813510 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:13:57.371322  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.381363  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.390883  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.400108  813510 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:13:57.408336  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.418081  813510 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.426522  813510 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:13:57.436168  813510 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:13:57.444824  813510 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:13:57.452799  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:57.576878  813510 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:13:57.723442  813510 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:13:57.723564  813510 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:13:57.727531  813510 start.go:563] Will wait 60s for crictl version
	I1017 21:13:57.727616  813510 ssh_runner.go:195] Run: which crictl
	I1017 21:13:57.731053  813510 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:13:57.755558  813510 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:13:57.755655  813510 ssh_runner.go:195] Run: crio --version
	I1017 21:13:57.784410  813510 ssh_runner.go:195] Run: crio --version
	I1017 21:13:57.823687  813510 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:13:57.826564  813510 cli_runner.go:164] Run: docker network inspect no-preload-820018 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:13:57.843057  813510 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:13:57.846713  813510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:13:57.855852  813510 kubeadm.go:883] updating cluster {Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:13:57.855961  813510 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:13:57.856011  813510 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:13:57.892542  813510 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:13:57.892567  813510 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:13:57.892576  813510 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1017 21:13:57.892666  813510 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-820018 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:13:57.892744  813510 ssh_runner.go:195] Run: crio config
	I1017 21:13:57.967521  813510 cni.go:84] Creating CNI manager for ""
	I1017 21:13:57.967560  813510 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:13:57.967584  813510 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:13:57.967609  813510 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-820018 NodeName:no-preload-820018 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:13:57.967764  813510 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-820018"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:13:57.967835  813510 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:13:57.977827  813510 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:13:57.977959  813510 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:13:57.985799  813510 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:13:57.999048  813510 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:13:58.014491  813510 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1017 21:13:58.029381  813510 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:13:58.033398  813510 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:13:58.045622  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:58.178202  813510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:58.201117  813510 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018 for IP: 192.168.85.2
	I1017 21:13:58.201190  813510 certs.go:195] generating shared ca certs ...
	I1017 21:13:58.201220  813510 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:58.201406  813510 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:13:58.201476  813510 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:13:58.201498  813510 certs.go:257] generating profile certs ...
	I1017 21:13:58.201629  813510 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.key
	I1017 21:13:58.201738  813510 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.key.f89ee78e
	I1017 21:13:58.201802  813510 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.key
	I1017 21:13:58.201947  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:13:58.202003  813510 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:13:58.202032  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:13:58.202087  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:13:58.202140  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:13:58.202204  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:13:58.202274  813510 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:13:58.202915  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:13:58.229014  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:13:58.248709  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:13:58.280192  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:13:58.299041  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:13:58.318709  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:13:58.342931  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:13:58.364628  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:13:58.389540  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:13:58.422003  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:13:58.441170  813510 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:13:58.461730  813510 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:13:58.476642  813510 ssh_runner.go:195] Run: openssl version
	I1017 21:13:58.484848  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:13:58.494561  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.498500  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.498576  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:13:58.542231  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:13:58.551505  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:13:58.559590  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.563405  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.563485  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:13:58.604708  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:13:58.612751  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:13:58.621207  813510 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.625316  813510 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.625382  813510 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:13:58.667530  813510 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:13:58.676105  813510 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:13:58.680687  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:13:58.723293  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:13:58.764888  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:13:58.806552  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:13:58.854288  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:13:58.900393  813510 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1017 21:13:58.955635  813510 kubeadm.go:400] StartCluster: {Name:no-preload-820018 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-820018 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:13:58.955748  813510 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:13:58.955811  813510 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:13:58.993468  813510 cri.go:89] found id: ""
	I1017 21:13:58.993577  813510 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:13:59.003232  813510 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:13:59.003269  813510 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:13:59.003327  813510 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:13:59.016140  813510 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:13:59.016782  813510 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-820018" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:59.017066  813510 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-820018" cluster setting kubeconfig missing "no-preload-820018" context setting]
	I1017 21:13:59.017593  813510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.019492  813510 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:13:59.028929  813510 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 21:13:59.028973  813510 kubeadm.go:601] duration metric: took 25.687361ms to restartPrimaryControlPlane
	I1017 21:13:59.028983  813510 kubeadm.go:402] duration metric: took 73.358813ms to StartCluster
	I1017 21:13:59.028998  813510 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.029070  813510 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:13:59.030014  813510 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:13:59.031011  813510 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:13:59.031403  813510 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:13:59.031442  813510 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:13:59.031586  813510 addons.go:69] Setting storage-provisioner=true in profile "no-preload-820018"
	I1017 21:13:59.031601  813510 addons.go:238] Setting addon storage-provisioner=true in "no-preload-820018"
	W1017 21:13:59.031612  813510 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:13:59.031635  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.032094  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.032255  813510 addons.go:69] Setting dashboard=true in profile "no-preload-820018"
	I1017 21:13:59.032276  813510 addons.go:238] Setting addon dashboard=true in "no-preload-820018"
	W1017 21:13:59.032285  813510 addons.go:247] addon dashboard should already be in state true
	I1017 21:13:59.032320  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.032797  813510 addons.go:69] Setting default-storageclass=true in profile "no-preload-820018"
	I1017 21:13:59.032821  813510 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-820018"
	I1017 21:13:59.033057  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.033308  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.039514  813510 out.go:179] * Verifying Kubernetes components...
	I1017 21:13:59.043263  813510 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:13:59.072292  813510 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:13:59.075854  813510 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 21:13:59.078819  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:13:59.078844  813510 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:13:59.078920  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.104110  813510 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:13:59.109450  813510 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:59.109474  813510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:13:59.109543  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.111341  813510 addons.go:238] Setting addon default-storageclass=true in "no-preload-820018"
	W1017 21:13:59.111373  813510 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:13:59.111402  813510 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:13:59.111874  813510 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:13:59.133171  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.161191  813510 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:59.161224  813510 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:13:59.161288  813510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:13:59.166514  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.190169  813510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:13:59.462309  813510 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:13:59.550791  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:13:59.550819  813510 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:13:59.573425  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:13:59.597018  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:13:59.624519  813510 node_ready.go:35] waiting up to 6m0s for node "no-preload-820018" to be "Ready" ...
	I1017 21:13:59.683982  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:13:59.684004  813510 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:13:59.886154  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:13:59.886178  813510 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:13:59.989920  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:13:59.989939  813510 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:14:00.055073  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:14:00.055098  813510 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:14:00.121371  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:14:00.121397  813510 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:14:00.160694  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:14:00.160719  813510 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:14:00.194580  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:14:00.194605  813510 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:14:00.234016  813510 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:14:00.234043  813510 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:14:00.274032  813510 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> CRI-O <==
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.553027792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.55981068Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.560512548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.576716784Z" level=info msg="Created container d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper" id=dcc4616d-cd3c-4ca3-bc23-01a42e3945c0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.577616726Z" level=info msg="Starting container: d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1" id=bef4ca23-aabf-4a3b-b95a-af1f70a25569 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.582729757Z" level=info msg="Started container" PID=1643 containerID=d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper id=bef4ca23-aabf-4a3b-b95a-af1f70a25569 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb
	Oct 17 21:13:43 old-k8s-version-521710 conmon[1641]: conmon d64d85044b4f3103cc0e <ninfo>: container 1643 exited with status 1
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.773959528Z" level=info msg="Removing container: a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.781597738Z" level=info msg="Error loading conmon cgroup of container a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f: cgroup deleted" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:43 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:43.784682015Z" level=info msg="Removed container a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn/dashboard-metrics-scraper" id=195e9288-a616-4481-b05a-a4b5c8d03548 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.571634859Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575884616Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575922049Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.575947551Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.579373099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.57941447Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.579437543Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582622613Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582659906Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.582683192Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.585871314Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.585904192Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.58593273Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.589042238Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:13:48 old-k8s-version-521710 crio[651]: time="2025-10-17T21:13:48.589075674Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	d64d85044b4f3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago       Exited              dashboard-metrics-scraper   2                   e9236bec97b9c       dashboard-metrics-scraper-5f989dc9cf-56mrn       kubernetes-dashboard
	2695b5995422f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           28 seconds ago       Running             storage-provisioner         2                   ee9980cc74477       storage-provisioner                              kube-system
	e243983797b3e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago       Running             kubernetes-dashboard        0                   f138f5c708842       kubernetes-dashboard-8694d4445c-66tmt            kubernetes-dashboard
	a7d328b0e4470       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           59 seconds ago       Running             busybox                     1                   70bb204381717       busybox                                          default
	39d79d60ebfd0       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           59 seconds ago       Running             coredns                     1                   7a7ba431a2d03       coredns-5dd5756b68-vbl7d                         kube-system
	ab89645fc3810       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           59 seconds ago       Running             kindnet-cni                 1                   d073edb99920b       kindnet-w5t9r                                    kube-system
	ae9791672cbc5       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           59 seconds ago       Exited              storage-provisioner         1                   ee9980cc74477       storage-provisioner                              kube-system
	b8d370901c995       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           59 seconds ago       Running             kube-proxy                  1                   f834f5ccadc91       kube-proxy-dz7dm                                 kube-system
	33c3db2cf92d2       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   3456de5b89a0b       kube-controller-manager-old-k8s-version-521710   kube-system
	d8c0022c99e83       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   840359d34a5cf       etcd-old-k8s-version-521710                      kube-system
	458627afcf154       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   8338d872c892e       kube-scheduler-old-k8s-version-521710            kube-system
	14842547ca451       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   8894f87f1d989       kube-apiserver-old-k8s-version-521710            kube-system
	
	
	==> coredns [39d79d60ebfd0b8d8522cbd3f39d40d526cbb7af715741008c8de6a9b84e0697] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51095 - 44765 "HINFO IN 3452905297381785184.8556009661708522168. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012702953s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-521710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-521710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=old-k8s-version-521710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_11_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:11:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-521710
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:13:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:11:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:13:37 +0000   Fri, 17 Oct 2025 21:12:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-521710
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                f23ef8d1-8109-4c2e-9a15-daa99b3bc5b9
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 coredns-5dd5756b68-vbl7d                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m5s
	  kube-system                 etcd-old-k8s-version-521710                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m19s
	  kube-system                 kindnet-w5t9r                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-old-k8s-version-521710             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-controller-manager-old-k8s-version-521710    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-proxy-dz7dm                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-old-k8s-version-521710             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-56mrn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-66tmt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m4s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m29s (x8 over 2m29s)  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m19s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m19s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m19s                  kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m6s                   node-controller  Node old-k8s-version-521710 event: Registered Node old-k8s-version-521710 in Controller
	  Normal  NodeReady                109s                   kubelet          Node old-k8s-version-521710 status is now: NodeReady
	  Normal  Starting                 68s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  68s (x8 over 68s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    68s (x8 over 68s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     68s (x8 over 68s)      kubelet          Node old-k8s-version-521710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                    node-controller  Node old-k8s-version-521710 event: Registered Node old-k8s-version-521710 in Controller
	
	
	==> dmesg <==
	[Oct17 20:49] overlayfs: idmapped layers are currently not supported
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d8c0022c99e83827d5965d19899287f1d3738a9e3099f2c25aa7d58906b43038] <==
	{"level":"info","ts":"2025-10-17T21:13:00.947399Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-10-17T21:13:00.947551Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-10-17T21:13:00.947865Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-10-17T21:13:00.94793Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-10-17T21:13:00.948021Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:13:00.948048Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-17T21:13:01.024043Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-17T21:13:01.024246Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-17T21:13:01.024273Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-17T21:13:01.02435Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:13:01.024357Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-10-17T21:13:02.165655Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165802Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165868Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-10-17T21:13:02.165908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.165962Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.165998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.166045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-10-17T21:13:02.170852Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:13:02.171966Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-10-17T21:13:02.17952Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-17T21:13:02.180485Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-17T21:13:02.180841Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-521710 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-17T21:13:02.181011Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-17T21:13:02.211524Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:14:07 up  3:56,  0 user,  load average: 3.95, 3.83, 3.20
	Linux old-k8s-version-521710 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ab89645fc3810009284b0ce9f74a350c1b848246e6f675169b08e6d8a64246d3] <==
	I1017 21:13:08.349566       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:13:08.349928       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:13:08.350109       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:13:08.350122       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:13:08.350146       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:13:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:13:08.571377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:13:08.571397       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:13:08.571406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:13:08.572084       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:13:38.572039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:13:38.572149       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:13:38.572057       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:13:38.572185       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1017 21:13:40.172166       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:13:40.172204       1 metrics.go:72] Registering metrics
	I1017 21:13:40.172265       1 controller.go:711] "Syncing nftables rules"
	I1017 21:13:48.571324       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:13:48.571360       1 main.go:301] handling current node
	I1017 21:13:58.577425       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:13:58.577532       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14842547ca451bc762ec2361f0a9c32c4c44d542fc8e0cf608c131f4deba223d] <==
	I1017 21:13:06.719502       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1017 21:13:06.719527       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1017 21:13:06.719655       1 shared_informer.go:318] Caches are synced for configmaps
	I1017 21:13:06.719771       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1017 21:13:06.721357       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:13:06.756731       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1017 21:13:06.759241       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1017 21:13:06.768885       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:13:06.785589       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1017 21:13:06.789532       1 aggregator.go:166] initial CRD sync complete...
	I1017 21:13:06.789884       1 autoregister_controller.go:141] Starting autoregister controller
	I1017 21:13:06.789929       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 21:13:06.789965       1 cache.go:39] Caches are synced for autoregister controller
	E1017 21:13:06.873406       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:13:07.373991       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:13:09.866146       1 controller.go:624] quota admission added evaluator for: namespaces
	I1017 21:13:09.952710       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1017 21:13:10.017213       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:13:10.044200       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:13:10.060033       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1017 21:13:10.166738       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.86.35"}
	I1017 21:13:10.228085       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.122.33"}
	I1017 21:13:19.756643       1 controller.go:624] quota admission added evaluator for: endpoints
	I1017 21:13:19.787179       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1017 21:13:19.891337       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [33c3db2cf92d215a82bbd2aff4ceb0af7e70dc0a10fb06745dfe4ad62a94ce45] <==
	I1017 21:13:19.851144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="56.307665ms"
	I1017 21:13:19.861917       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.627962ms"
	I1017 21:13:19.873334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.131883ms"
	I1017 21:13:19.873554       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.933µs"
	I1017 21:13:19.876131       1 shared_informer.go:318] Caches are synced for disruption
	I1017 21:13:19.879681       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1017 21:13:19.880159       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1017 21:13:19.880391       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="151.773µs"
	I1017 21:13:19.887943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="25.702787ms"
	I1017 21:13:19.888068       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.586µs"
	I1017 21:13:19.901835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.159µs"
	I1017 21:13:19.924712       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 21:13:19.955907       1 shared_informer.go:318] Caches are synced for resource quota
	I1017 21:13:20.330495       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:13:20.330529       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1017 21:13:20.334722       1 shared_informer.go:318] Caches are synced for garbage collector
	I1017 21:13:25.721232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="60.153µs"
	I1017 21:13:26.739827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="510.653µs"
	I1017 21:13:27.751594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.41µs"
	I1017 21:13:31.765761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.34065ms"
	I1017 21:13:31.766239       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="75.464µs"
	I1017 21:13:43.796360       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="46.155µs"
	I1017 21:13:46.014010       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.497672ms"
	I1017 21:13:46.014195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.675µs"
	I1017 21:13:51.060124       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.119µs"
	
	
	==> kube-proxy [b8d370901c995c38b064dd226c47b831e35b4405a3fe066e8cf76e1661949864] <==
	I1017 21:13:08.419310       1 server_others.go:69] "Using iptables proxy"
	I1017 21:13:08.590878       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1017 21:13:09.423999       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:13:09.540625       1 server_others.go:152] "Using iptables Proxier"
	I1017 21:13:09.582614       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1017 21:13:09.582635       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1017 21:13:09.582678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1017 21:13:09.582951       1 server.go:846] "Version info" version="v1.28.0"
	I1017 21:13:09.582961       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:13:09.597942       1 config.go:188] "Starting service config controller"
	I1017 21:13:09.597974       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1017 21:13:09.597994       1 config.go:97] "Starting endpoint slice config controller"
	I1017 21:13:09.597997       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1017 21:13:09.600366       1 config.go:315] "Starting node config controller"
	I1017 21:13:09.600383       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1017 21:13:09.701358       1 shared_informer.go:318] Caches are synced for service config
	I1017 21:13:09.701431       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1017 21:13:09.701682       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [458627afcf1547fcfe3e59bdd03098f604663cbcf6e3839ca89e6efa3c90197a] <==
	I1017 21:13:03.186526       1 serving.go:348] Generated self-signed cert in-memory
	W1017 21:13:06.615615       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1017 21:13:06.615722       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1017 21:13:06.615756       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1017 21:13:06.615800       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1017 21:13:06.752423       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1017 21:13:06.752522       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:13:06.756843       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:13:06.756884       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1017 21:13:06.757345       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1017 21:13:06.757436       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1017 21:13:06.858045       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887164     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghkjl\" (UniqueName: \"kubernetes.io/projected/155b15b2-18b2-41fd-b2a9-3cff308a5a6d-kube-api-access-ghkjl\") pod \"kubernetes-dashboard-8694d4445c-66tmt\" (UID: \"155b15b2-18b2-41fd-b2a9-3cff308a5a6d\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-66tmt"
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887280     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/075072c9-9605-4d9d-9a9d-ab5cfbfb5b51-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-56mrn\" (UID: \"075072c9-9605-4d9d-9a9d-ab5cfbfb5b51\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn"
	Oct 17 21:13:19 old-k8s-version-521710 kubelet[779]: I1017 21:13:19.887376     779 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqbj8\" (UniqueName: \"kubernetes.io/projected/075072c9-9605-4d9d-9a9d-ab5cfbfb5b51-kube-api-access-kqbj8\") pod \"dashboard-metrics-scraper-5f989dc9cf-56mrn\" (UID: \"075072c9-9605-4d9d-9a9d-ab5cfbfb5b51\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn"
	Oct 17 21:13:21 old-k8s-version-521710 kubelet[779]: W1017 21:13:21.069261     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/crio-e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb WatchSource:0}: Error finding container e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb: Status 404 returned error can't find the container with id e9236bec97b9cc1629f6dc724613ee968f95dde16fcffaf8ffb9a48f9ab6d9bb
	Oct 17 21:13:21 old-k8s-version-521710 kubelet[779]: W1017 21:13:21.082743     779 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/35a78dd091010d1ef5b4d67506c02669b35f9871fb9529e95054e2e284b93d77/crio-f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115 WatchSource:0}: Error finding container f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115: Status 404 returned error can't find the container with id f138f5c70884207f7392b58d26742098987cf83555d78401d361cac1feb2b115
	Oct 17 21:13:25 old-k8s-version-521710 kubelet[779]: I1017 21:13:25.707290     779 scope.go:117] "RemoveContainer" containerID="bcdfd482cf2d73e50bd62bb463bb7c8eda8fe2cfd2b4fd900b860ccdae8149ad"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: I1017 21:13:26.717065     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: E1017 21:13:26.718252     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:26 old-k8s-version-521710 kubelet[779]: I1017 21:13:26.720255     779 scope.go:117] "RemoveContainer" containerID="bcdfd482cf2d73e50bd62bb463bb7c8eda8fe2cfd2b4fd900b860ccdae8149ad"
	Oct 17 21:13:27 old-k8s-version-521710 kubelet[779]: I1017 21:13:27.728049     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:27 old-k8s-version-521710 kubelet[779]: E1017 21:13:27.737169     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:31 old-k8s-version-521710 kubelet[779]: I1017 21:13:31.037622     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:31 old-k8s-version-521710 kubelet[779]: E1017 21:13:31.038136     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:38 old-k8s-version-521710 kubelet[779]: I1017 21:13:38.756566     779 scope.go:117] "RemoveContainer" containerID="ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1"
	Oct 17 21:13:38 old-k8s-version-521710 kubelet[779]: I1017 21:13:38.776632     779 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-66tmt" podStartSLOduration=9.432475378 podCreationTimestamp="2025-10-17 21:13:19 +0000 UTC" firstStartedPulling="2025-10-17 21:13:21.087247572 +0000 UTC m=+21.910778516" lastFinishedPulling="2025-10-17 21:13:31.428674978 +0000 UTC m=+32.252205930" observedRunningTime="2025-10-17 21:13:31.752742134 +0000 UTC m=+32.576273086" watchObservedRunningTime="2025-10-17 21:13:38.773902792 +0000 UTC m=+39.597433736"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.549284     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.771522     779 scope.go:117] "RemoveContainer" containerID="a134e7405c1ade8095882c42ef8be19e74299b644c654e65b3823e09cab8587f"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: I1017 21:13:43.771804     779 scope.go:117] "RemoveContainer" containerID="d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	Oct 17 21:13:43 old-k8s-version-521710 kubelet[779]: E1017 21:13:43.772105     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:13:51 old-k8s-version-521710 kubelet[779]: I1017 21:13:51.037893     779 scope.go:117] "RemoveContainer" containerID="d64d85044b4f3103cc0e6b0b3601c71c90aefcb186e406e7f2d3dcda019658b1"
	Oct 17 21:13:51 old-k8s-version-521710 kubelet[779]: E1017 21:13:51.038212     779 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-56mrn_kubernetes-dashboard(075072c9-9605-4d9d-9a9d-ab5cfbfb5b51)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-56mrn" podUID="075072c9-9605-4d9d-9a9d-ab5cfbfb5b51"
	Oct 17 21:14:00 old-k8s-version-521710 kubelet[779]: I1017 21:14:00.628046     779 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:14:00 old-k8s-version-521710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e243983797b3e7a9ac1f797e5ff4593ae3fa1ce937f2f6e8b4adf65a4d116c0b] <==
	2025/10/17 21:13:31 Using namespace: kubernetes-dashboard
	2025/10/17 21:13:31 Using in-cluster config to connect to apiserver
	2025/10/17 21:13:31 Using secret token for csrf signing
	2025/10/17 21:13:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:13:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:13:31 Successful initial request to the apiserver, version: v1.28.0
	2025/10/17 21:13:31 Generating JWE encryption key
	2025/10/17 21:13:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:13:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:13:31 Initializing JWE encryption key from synchronized object
	2025/10/17 21:13:31 Creating in-cluster Sidecar client
	2025/10/17 21:13:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:13:31 Serving insecurely on HTTP port: 9090
	2025/10/17 21:14:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:13:31 Starting overwatch
	
	
	==> storage-provisioner [2695b5995422f52d2280a5c04a29a312332fd9c9207c0d9ceb8a4a2415d6f942] <==
	I1017 21:13:38.821165       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:13:38.838754       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:13:38.838805       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1017 21:13:56.242117       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:13:56.242619       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2adb1101-5e3e-4bb4-b42e-5187960e23fd", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0 became leader
	I1017 21:13:56.242792       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0!
	I1017 21:13:56.343098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-521710_5bc07afd-7135-4e95-8c18-e2258c9083e0!
	
	
	==> storage-provisioner [ae9791672cbc597a9224e2c217de2eae8f4b6588d750df7cca580e9748e14fc1] <==
	I1017 21:13:08.235839       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:13:38.237861       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-521710 -n old-k8s-version-521710
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-521710 -n old-k8s-version-521710: exit status 2 (544.933014ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-521710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (9.38s)
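Editor's note on the Pause failures in this group: in the no-preload/serial/Pause stderr captured further below, the pause flow first enumerates CRI containers with crictl and then runs `sudo runc list -f json`, which repeatedly fails with `open /run/runc: no such file or directory`; after the retries are exhausted the command exits with GUEST_PAUSE. The Go sketch below is a simplified stand-in for that probe, not minikube's actual pause.go, and it is assumed to run directly on the node rather than over SSH as minikube does; the runcRoot override is a hypothetical knob, since CRI-O may keep its runc state under a root other than the runc default.

// probe.go - minimal sketch of the container-listing probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunning mirrors the two steps from the stderr capture: ask crictl for
// container IDs in a namespace minikube inspects, then ask runc for its view
// of running containers.
func listRunning(runcRoot string) error {
	crictl := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system")
	if out, err := crictl.CombinedOutput(); err != nil {
		return fmt.Errorf("crictl ps: %v: %s", err, out)
	}

	// This is the call that fails in the report with
	// "open /run/runc: no such file or directory".
	args := []string{"runc"}
	if runcRoot != "" {
		// Hypothetical override: point runc at the state directory the
		// runtime actually uses (this varies with the CRI-O configuration).
		args = append(args, "--root", runcRoot)
	}
	args = append(args, "list", "-f", "json")
	runc := exec.Command("sudo", args...)
	if out, err := runc.CombinedOutput(); err != nil {
		return fmt.Errorf("runc list: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Retry a few times with a short backoff, like the retry.go lines in the stderr.
	for attempt := 1; attempt <= 3; attempt++ {
		if err := listRunning(""); err != nil {
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(300 * time.Millisecond)
			continue
		}
		fmt.Println("containers listed successfully")
		return
	}
}

Run it on a node where crictl and runc are on PATH; whether pointing --root at a different state directory actually clears the error depends on the node's CRI-O runtime configuration, which this report does not show.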

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (7.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-820018 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-820018 --alsologtostderr -v=1: exit status 80 (2.317471885s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-820018 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 21:14:54.380431  819488 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:14:54.380623  819488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:54.380655  819488 out.go:374] Setting ErrFile to fd 2...
	I1017 21:14:54.380677  819488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:54.380956  819488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:14:54.381254  819488 out.go:368] Setting JSON to false
	I1017 21:14:54.381315  819488 mustload.go:65] Loading cluster: no-preload-820018
	I1017 21:14:54.381730  819488 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:54.382225  819488 cli_runner.go:164] Run: docker container inspect no-preload-820018 --format={{.State.Status}}
	I1017 21:14:54.401906  819488 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:14:54.402224  819488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:54.462450  819488 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 21:14:54.452979057 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:54.463244  819488 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-820018 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 21:14:54.466718  819488 out.go:179] * Pausing node no-preload-820018 ... 
	I1017 21:14:54.470449  819488 host.go:66] Checking if "no-preload-820018" exists ...
	I1017 21:14:54.470830  819488 ssh_runner.go:195] Run: systemctl --version
	I1017 21:14:54.470892  819488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-820018
	I1017 21:14:54.488677  819488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33839 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/no-preload-820018/id_rsa Username:docker}
	I1017 21:14:54.596561  819488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:54.624987  819488 pause.go:52] kubelet running: true
	I1017 21:14:54.625068  819488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:54.886816  819488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:54.886906  819488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:54.959376  819488 cri.go:89] found id: "488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a"
	I1017 21:14:54.959400  819488 cri.go:89] found id: "5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710"
	I1017 21:14:54.959405  819488 cri.go:89] found id: "22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503"
	I1017 21:14:54.959409  819488 cri.go:89] found id: "726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231"
	I1017 21:14:54.959412  819488 cri.go:89] found id: "2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	I1017 21:14:54.959416  819488 cri.go:89] found id: "5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f"
	I1017 21:14:54.959421  819488 cri.go:89] found id: "26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a"
	I1017 21:14:54.959448  819488 cri.go:89] found id: "b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1"
	I1017 21:14:54.959457  819488 cri.go:89] found id: "2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead"
	I1017 21:14:54.959464  819488 cri.go:89] found id: "e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	I1017 21:14:54.959475  819488 cri.go:89] found id: "e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c"
	I1017 21:14:54.959479  819488 cri.go:89] found id: ""
	I1017 21:14:54.959553  819488 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:54.970869  819488 retry.go:31] will retry after 197.849471ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:54Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:14:55.169375  819488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:55.182912  819488 pause.go:52] kubelet running: false
	I1017 21:14:55.182980  819488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:55.360493  819488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:55.360590  819488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:55.428675  819488 cri.go:89] found id: "488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a"
	I1017 21:14:55.428747  819488 cri.go:89] found id: "5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710"
	I1017 21:14:55.428767  819488 cri.go:89] found id: "22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503"
	I1017 21:14:55.428778  819488 cri.go:89] found id: "726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231"
	I1017 21:14:55.428782  819488 cri.go:89] found id: "2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	I1017 21:14:55.428786  819488 cri.go:89] found id: "5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f"
	I1017 21:14:55.428789  819488 cri.go:89] found id: "26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a"
	I1017 21:14:55.428793  819488 cri.go:89] found id: "b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1"
	I1017 21:14:55.428796  819488 cri.go:89] found id: "2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead"
	I1017 21:14:55.428802  819488 cri.go:89] found id: "e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	I1017 21:14:55.428805  819488 cri.go:89] found id: "e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c"
	I1017 21:14:55.428817  819488 cri.go:89] found id: ""
	I1017 21:14:55.428876  819488 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:55.440534  819488 retry.go:31] will retry after 305.617174ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:55Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:14:55.747117  819488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:55.762346  819488 pause.go:52] kubelet running: false
	I1017 21:14:55.762436  819488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:55.951185  819488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:55.951276  819488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:56.033448  819488 cri.go:89] found id: "488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a"
	I1017 21:14:56.033474  819488 cri.go:89] found id: "5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710"
	I1017 21:14:56.033480  819488 cri.go:89] found id: "22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503"
	I1017 21:14:56.033484  819488 cri.go:89] found id: "726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231"
	I1017 21:14:56.033488  819488 cri.go:89] found id: "2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	I1017 21:14:56.033492  819488 cri.go:89] found id: "5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f"
	I1017 21:14:56.033495  819488 cri.go:89] found id: "26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a"
	I1017 21:14:56.033498  819488 cri.go:89] found id: "b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1"
	I1017 21:14:56.033502  819488 cri.go:89] found id: "2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead"
	I1017 21:14:56.033510  819488 cri.go:89] found id: "e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	I1017 21:14:56.033514  819488 cri.go:89] found id: "e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c"
	I1017 21:14:56.033517  819488 cri.go:89] found id: ""
	I1017 21:14:56.033595  819488 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:56.045863  819488 retry.go:31] will retry after 303.88017ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:56Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:14:56.350409  819488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:14:56.363365  819488 pause.go:52] kubelet running: false
	I1017 21:14:56.363451  819488 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:14:56.534528  819488 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:14:56.534603  819488 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:14:56.612294  819488 cri.go:89] found id: "488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a"
	I1017 21:14:56.612326  819488 cri.go:89] found id: "5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710"
	I1017 21:14:56.612331  819488 cri.go:89] found id: "22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503"
	I1017 21:14:56.612335  819488 cri.go:89] found id: "726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231"
	I1017 21:14:56.612338  819488 cri.go:89] found id: "2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	I1017 21:14:56.612342  819488 cri.go:89] found id: "5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f"
	I1017 21:14:56.612345  819488 cri.go:89] found id: "26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a"
	I1017 21:14:56.612370  819488 cri.go:89] found id: "b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1"
	I1017 21:14:56.612374  819488 cri.go:89] found id: "2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead"
	I1017 21:14:56.612381  819488 cri.go:89] found id: "e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	I1017 21:14:56.612384  819488 cri.go:89] found id: "e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c"
	I1017 21:14:56.612387  819488 cri.go:89] found id: ""
	I1017 21:14:56.612452  819488 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:14:56.627492  819488 out.go:203] 
	W1017 21:14:56.630415  819488 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:14:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 21:14:56.630496  819488 out.go:285] * 
	* 
	W1017 21:14:56.639320  819488 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 21:14:56.642926  819488 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-820018 --alsologtostderr -v=1 failed: exit status 80
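The pause failure above reduces to "sudo runc list -f json" exiting with status 1 because /run/runc is missing on this CRI-O node; minikube retries the listing with roughly a 300ms backoff (the retry.go lines above) and finally aborts with GUEST_PAUSE, which is why the pause command returns exit status 80. A minimal sketch of that retry loop, assuming only the Go standard library (listRunning, the 3-attempt budget and the fixed 300ms delay are illustrative assumptions, not minikube's actual retry helper):

    // runc_list_retry.go: sketch of the retry loop seen in the pause log above.
    // listRunning, the 3-attempt budget and the 300ms delay are assumptions for
    // illustration; they are not minikube's real retry API.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func listRunning() ([]byte, error) {
        var lastErr error
        for attempt := 1; attempt <= 3; attempt++ {
            out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
            if err == nil {
                return out, nil
            }
            // On this CRI-O node the call fails with
            // "open /run/runc: no such file or directory".
            lastErr = fmt.Errorf("runc list: %w: %s", err, out)
            time.Sleep(300 * time.Millisecond)
        }
        return nil, lastErr
    }

    func main() {
        out, err := listRunning()
        if err != nil {
            fmt.Println("giving up (this is where GUEST_PAUSE is reported):", err)
            return
        }
        fmt.Printf("%s\n", out)
    }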
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
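The post-mortem begins by snapshotting the host proxy environment, rendering unset variables as "<empty>". A tiny sketch of that snapshot, assuming only the standard library (render is an illustrative helper, not the helpers_test API):

    // proxy_env_snapshot.go: sketch of the PROXY env snapshot printed above.
    package main

    import (
        "fmt"
        "os"
    )

    // render substitutes "<empty>" for unset variables, as in the report line above.
    func render(v string) string {
        if v == "" {
            return "<empty>"
        }
        return v
    }

    func main() {
        for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
            fmt.Printf("%s=%q ", k, render(os.Getenv(k)))
        }
        fmt.Println()
    }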
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-820018
helpers_test.go:243: (dbg) docker inspect no-preload-820018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	        "Created": "2025-10-17T21:12:18.108117414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 813640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:13:51.645292904Z",
	            "FinishedAt": "2025-10-17T21:13:50.778357092Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hostname",
	        "HostsPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hosts",
	        "LogPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589-json.log",
	        "Name": "/no-preload-820018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-820018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-820018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	                "LowerDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-820018",
	                "Source": "/var/lib/docker/volumes/no-preload-820018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-820018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-820018",
	                "name.minikube.sigs.k8s.io": "no-preload-820018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "354ab397ea11fe050eceb0ac4f0957890990f5a57b2df22cf1836e73ff286dcd",
	            "SandboxKey": "/var/run/docker/netns/354ab397ea11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-820018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:20:c3:50:f4:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5060c6ac5e7d3e19dab985b5302ecd4b006296949593ffc066761654983bbcd9",
	                    "EndpointID": "2d6e65155d23f2dec2888df2353e03cbbff95a63a20999abce77aa11f16dedbf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-820018",
	                        "9842fccb0456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
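The docker inspect output above is also what later provisioning steps query for published host ports; for example, SSH to the node goes through .NetworkSettings.Ports["22/tcp"][0].HostPort (33839 here), read with the same Go template that appears in the start log further down. A short sketch of that lookup, assuming the docker CLI is on PATH (hostPort is an illustrative wrapper):

    // host_port.go: sketch of reading a published port from docker inspect,
    // using the same Go template as the "docker container inspect -f" calls below.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port that "port" (e.g. "22/tcp") is published on.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("no-preload-820018", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh published on 127.0.0.1:" + p) // 33839 in the inspect output above
    }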
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018: exit status 2 (361.018668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
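The harness tolerates the non-zero exit here ("may be ok"): stdout still reports the host as Running, and only the exit code signals that other components are not healthy. A sketch of reading both the output and the code without treating every non-zero exit as fatal, assuming only os/exec (the binary and profile names are taken from this run):

    // status_exit.go: sketch of running the status command and inspecting its
    // exit code the way the post-mortem above does.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        cmd := exec.Command("out/minikube-linux-arm64", "status", "--format={{.Host}}", "-p", "no-preload-820018")
        out, err := cmd.Output() // stdout is still captured when the command exits non-zero
        code := 0
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            code = exitErr.ExitCode() // 2 in the run above
        } else if err != nil {
            fmt.Println("could not run status:", err)
            return
        }
        fmt.Printf("host=%q exit=%d (non-zero may still be ok)\n", strings.TrimSpace(string(out)), code)
    }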
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-820018 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-820018 logs -n 25: (1.352218502s)
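The log lines that follow are klog-formatted; the "Last Start" section below documents the header as [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A small sketch of splitting that header off a line, assuming only regexp (parseKlog is an illustrative helper):

    // klog_line.go: sketch of parsing the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    // header documented in the "Last Start" section below.
    package main

    import (
        "fmt"
        "regexp"
    )

    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    // parseKlog returns severity, file:line and message, or ok=false for non-log lines.
    func parseKlog(line string) (sev, loc, msg string, ok bool) {
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            return "", "", "", false
        }
        return m[1], m[5] + ":" + m[6], m[7], true
    }

    func main() {
        sev, loc, msg, ok := parseKlog("I1017 21:14:12.482546  816637 out.go:360] Setting OutFile to fd 1 ...")
        if ok {
            fmt.Printf("%s %s %s\n", sev, loc, msg)
        }
    }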
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583     │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:14:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:14:12.482546  816637 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:14:12.482667  816637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:12.482678  816637 out.go:374] Setting ErrFile to fd 2...
	I1017 21:14:12.482682  816637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:12.482939  816637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:14:12.483389  816637 out.go:368] Setting JSON to false
	I1017 21:14:12.484349  816637 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14198,"bootTime":1760721454,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:14:12.484420  816637 start.go:141] virtualization:  
	I1017 21:14:12.487979  816637 out.go:179] * [embed-certs-629583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:14:12.491065  816637 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:14:12.491156  816637 notify.go:220] Checking for updates...
	I1017 21:14:12.498295  816637 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:14:12.501146  816637 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:14:12.503992  816637 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:14:12.506840  816637 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:14:12.509667  816637 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:14:12.513125  816637 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:12.513233  816637 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:14:12.548048  816637 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:14:12.548212  816637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:12.608138  816637 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 21:14:12.598948406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:12.608265  816637 docker.go:318] overlay module found
	I1017 21:14:12.611430  816637 out.go:179] * Using the docker driver based on user configuration
	I1017 21:14:12.614277  816637 start.go:305] selected driver: docker
	I1017 21:14:12.614298  816637 start.go:925] validating driver "docker" against <nil>
	I1017 21:14:12.614313  816637 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:14:12.615057  816637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:12.695237  816637 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 21:14:12.683189536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:12.695402  816637 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 21:14:12.695633  816637 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:14:12.698630  816637 out.go:179] * Using Docker driver with root privileges
	I1017 21:14:12.701504  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:12.701573  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:12.701585  816637 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:14:12.701670  816637 start.go:349] cluster config:
	{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:14:12.704594  816637 out.go:179] * Starting "embed-certs-629583" primary control-plane node in "embed-certs-629583" cluster
	I1017 21:14:12.707457  816637 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:14:12.710332  816637 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:14:12.713248  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:12.713301  816637 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:14:12.713314  816637 cache.go:58] Caching tarball of preloaded images
	I1017 21:14:12.713340  816637 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:14:12.713408  816637 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:14:12.713431  816637 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:14:12.713531  816637 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:14:12.713552  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json: {Name:mk6a1238dd71845769fa9266f3bd52e2343a2974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:12.733029  816637 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:14:12.733054  816637 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:14:12.733073  816637 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:14:12.733095  816637 start.go:360] acquireMachinesLock for embed-certs-629583: {Name:mk04401a4732e984651d3d859464878000ecb8c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:14:12.733222  816637 start.go:364] duration metric: took 106.102µs to acquireMachinesLock for "embed-certs-629583"
	I1017 21:14:12.733253  816637 start.go:93] Provisioning new machine with config: &{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:14:12.733333  816637 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:14:12.001394  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:14.449862  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:12.736602  816637 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:14:12.736847  816637 start.go:159] libmachine.API.Create for "embed-certs-629583" (driver="docker")
	I1017 21:14:12.736900  816637 client.go:168] LocalClient.Create starting
	I1017 21:14:12.737000  816637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:14:12.737043  816637 main.go:141] libmachine: Decoding PEM data...
	I1017 21:14:12.737064  816637 main.go:141] libmachine: Parsing certificate...
	I1017 21:14:12.737118  816637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:14:12.737143  816637 main.go:141] libmachine: Decoding PEM data...
	I1017 21:14:12.737156  816637 main.go:141] libmachine: Parsing certificate...
	I1017 21:14:12.737517  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:14:12.753957  816637 cli_runner.go:211] docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:14:12.754036  816637 network_create.go:284] running [docker network inspect embed-certs-629583] to gather additional debugging logs...
	I1017 21:14:12.754057  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583
	W1017 21:14:12.770736  816637 cli_runner.go:211] docker network inspect embed-certs-629583 returned with exit code 1
	I1017 21:14:12.770775  816637 network_create.go:287] error running [docker network inspect embed-certs-629583]: docker network inspect embed-certs-629583: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-629583 not found
	I1017 21:14:12.770790  816637 network_create.go:289] output of [docker network inspect embed-certs-629583]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-629583 not found
	
	** /stderr **
	I1017 21:14:12.770887  816637 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:14:12.788743  816637 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:14:12.789136  816637 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:14:12.789440  816637 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:14:12.789905  816637 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a33520}
	I1017 21:14:12.789931  816637 network_create.go:124] attempt to create docker network embed-certs-629583 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 21:14:12.789991  816637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-629583 embed-certs-629583
	I1017 21:14:12.853748  816637 network_create.go:108] docker network embed-certs-629583 192.168.76.0/24 created
	I1017 21:14:12.853784  816637 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-629583" container
	I1017 21:14:12.853859  816637 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:14:12.869586  816637 cli_runner.go:164] Run: docker volume create embed-certs-629583 --label name.minikube.sigs.k8s.io=embed-certs-629583 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:14:12.891182  816637 oci.go:103] Successfully created a docker volume embed-certs-629583
	I1017 21:14:12.891337  816637 cli_runner.go:164] Run: docker run --rm --name embed-certs-629583-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629583 --entrypoint /usr/bin/test -v embed-certs-629583:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:14:13.467300  816637 oci.go:107] Successfully prepared a docker volume embed-certs-629583
	I1017 21:14:13.467363  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:13.467383  816637 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:14:13.467480  816637 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:14:16.452202  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:18.977967  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:19.681520  816637 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.214000883s)
	I1017 21:14:19.681553  816637 kic.go:203] duration metric: took 6.214165988s to extract preloaded images to volume ...
	W1017 21:14:19.681698  816637 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:14:19.681812  816637 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:14:19.760881  816637 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-629583 --name embed-certs-629583 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629583 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-629583 --network embed-certs-629583 --ip 192.168.76.2 --volume embed-certs-629583:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:14:20.115513  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Running}}
	I1017 21:14:20.138221  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:20.167278  816637 cli_runner.go:164] Run: docker exec embed-certs-629583 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:14:20.233055  816637 oci.go:144] the created container "embed-certs-629583" has a running status.
	I1017 21:14:20.233094  816637 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa...
	I1017 21:14:21.615329  816637 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:14:21.641646  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:21.669049  816637 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:14:21.669068  816637 kic_runner.go:114] Args: [docker exec --privileged embed-certs-629583 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:14:21.750758  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:21.779546  816637 machine.go:93] provisionDockerMachine start ...
	I1017 21:14:21.779653  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:21.810155  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:21.810498  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:21.810515  816637 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:14:21.811089  816637 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53596->127.0.0.1:33844: read: connection reset by peer
	W1017 21:14:21.449982  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:23.948947  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:25.949534  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:24.978889  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
	
	I1017 21:14:24.978912  816637 ubuntu.go:182] provisioning hostname "embed-certs-629583"
	I1017 21:14:24.978974  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.012594  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.012902  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.012913  816637 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629583 && echo "embed-certs-629583" | sudo tee /etc/hostname
	I1017 21:14:25.189999  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
	
	I1017 21:14:25.190090  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.213821  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.214126  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.214142  816637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:14:25.371261  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:14:25.371295  816637 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:14:25.371321  816637 ubuntu.go:190] setting up certificates
	I1017 21:14:25.371331  816637 provision.go:84] configureAuth start
	I1017 21:14:25.371405  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:25.398563  816637 provision.go:143] copyHostCerts
	I1017 21:14:25.398628  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:14:25.398637  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:14:25.398722  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:14:25.398808  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:14:25.398814  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:14:25.398841  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:14:25.398899  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:14:25.398903  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:14:25.398929  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:14:25.398975  816637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629583 san=[127.0.0.1 192.168.76.2 embed-certs-629583 localhost minikube]
	I1017 21:14:25.666653  816637 provision.go:177] copyRemoteCerts
	I1017 21:14:25.666732  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:14:25.666778  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.688302  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:25.798879  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:14:25.818499  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:14:25.844255  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:14:25.875261  816637 provision.go:87] duration metric: took 503.903304ms to configureAuth
	I1017 21:14:25.875285  816637 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:14:25.875466  816637 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:25.875573  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.900869  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.901181  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.901200  816637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:14:26.246433  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:14:26.246498  816637 machine.go:96] duration metric: took 4.466925554s to provisionDockerMachine
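
The container-runtime options written just above land in /etc/sysconfig/crio.minikube before cri-o is restarted. A small Go sketch of that write-and-restart step, assuming local root access instead of the SSH session; the file path and content are taken from the log, the program itself is illustrative only:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        // File content matches what the log shows being piped into tee.
        conf := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
        if err := os.MkdirAll("/etc/sysconfig", 0755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(conf), 0644); err != nil {
            panic(err)
        }
        // Restart cri-o so the extra options are picked up.
        if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
            panic(string(out))
        }
    }
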
	I1017 21:14:26.246526  816637 client.go:171] duration metric: took 13.509613225s to LocalClient.Create
	I1017 21:14:26.246556  816637 start.go:167] duration metric: took 13.509714519s to libmachine.API.Create "embed-certs-629583"
	I1017 21:14:26.246599  816637 start.go:293] postStartSetup for "embed-certs-629583" (driver="docker")
	I1017 21:14:26.246627  816637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:14:26.246730  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:14:26.246774  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.265883  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.371310  816637 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:14:26.374825  816637 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:14:26.374849  816637 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:14:26.374864  816637 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:14:26.374917  816637 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:14:26.374996  816637 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:14:26.375132  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:14:26.382389  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:14:26.401627  816637 start.go:296] duration metric: took 154.995689ms for postStartSetup
	I1017 21:14:26.401988  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:26.418450  816637 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:14:26.418744  816637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:14:26.418783  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.438238  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.541895  816637 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:14:26.546617  816637 start.go:128] duration metric: took 13.813270763s to createHost
	I1017 21:14:26.546642  816637 start.go:83] releasing machines lock for "embed-certs-629583", held for 13.813407823s
	I1017 21:14:26.546750  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:26.564665  816637 ssh_runner.go:195] Run: cat /version.json
	I1017 21:14:26.564704  816637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:14:26.564715  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.564756  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.594108  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.602591  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.710743  816637 ssh_runner.go:195] Run: systemctl --version
	I1017 21:14:26.799353  816637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:14:26.838176  816637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:14:26.843491  816637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:14:26.843595  816637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:14:26.872078  816637 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:14:26.872098  816637 start.go:495] detecting cgroup driver to use...
	I1017 21:14:26.872130  816637 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:14:26.872179  816637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:14:26.890480  816637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:14:26.903526  816637 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:14:26.903586  816637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:14:26.921240  816637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:14:26.939725  816637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:14:27.078584  816637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:14:27.210534  816637 docker.go:234] disabling docker service ...
	I1017 21:14:27.210600  816637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:14:27.239000  816637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:14:27.254897  816637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:14:27.394283  816637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:14:27.514424  816637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:14:27.529717  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:14:27.545071  816637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:14:27.545183  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.554910  816637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:14:27.555027  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.564058  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.572885  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.581865  816637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:14:27.590179  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.598832  816637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.612313  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.621302  816637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:14:27.629649  816637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:14:27.637509  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:27.759613  816637 ssh_runner.go:195] Run: sudo systemctl restart crio
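
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed: pin pause_image to registry.k8s.io/pause:3.10.1, force cgroup_manager to "cgroupfs", re-add conmon_cgroup = "pod", open unprivileged ports via default_sysctls, then daemon-reload and restart cri-o. A rough Go equivalent of just the two main substitutions, as a sketch that assumes both keys already exist in the drop-in (rewriteCrioConf is a hypothetical helper):

    package main

    import (
        "os"
        "regexp"
    )

    // rewriteCrioConf applies the same substitutions the log performs with sed:
    // pin the pause image and switch the cgroup manager to cgroupfs.
    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
            panic(err)
        }
    }
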
	I1017 21:14:28.272363  816637 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:14:28.272508  816637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:14:28.276406  816637 start.go:563] Will wait 60s for crictl version
	I1017 21:14:28.276512  816637 ssh_runner.go:195] Run: which crictl
	I1017 21:14:28.280107  816637 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:14:28.309200  816637 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:14:28.309357  816637 ssh_runner.go:195] Run: crio --version
	I1017 21:14:28.338914  816637 ssh_runner.go:195] Run: crio --version
	I1017 21:14:28.374319  816637 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 21:14:28.452117  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:30.950821  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:28.377417  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:14:28.394485  816637 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:14:28.399209  816637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:14:28.409850  816637 kubeadm.go:883] updating cluster {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:14:28.409962  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:28.410018  816637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:14:28.444768  816637 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:14:28.444788  816637 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:14:28.444846  816637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:14:28.476508  816637 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:14:28.476594  816637 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:14:28.476619  816637 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:14:28.476735  816637 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
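
The kubelet unit drop-in above (later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 368 bytes) is just the node name, IP and Kubernetes version substituted into a fixed template. A text/template sketch of that rendering; the template text and struct fields here are illustrative, not minikube's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from the log above; the struct is illustrative only.
        err := t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.34.1", "embed-certs-629583", "192.168.76.2"})
        if err != nil {
            panic(err)
        }
    }
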
	I1017 21:14:28.476853  816637 ssh_runner.go:195] Run: crio config
	I1017 21:14:28.536506  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:28.536580  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:28.536618  816637 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:14:28.536672  816637 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629583 NodeName:embed-certs-629583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:14:28.536841  816637 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:14:28.536947  816637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:14:28.546667  816637 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:14:28.546744  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:14:28.554687  816637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 21:14:28.568713  816637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:14:28.582403  816637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
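
The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new as a multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small Go sketch that decodes such a file with gopkg.in/yaml.v3 and reports the kubelet cgroupDriver, which should match the "cgroupfs" value cri-o was configured with; this check is illustrative and not part of the test harness:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once every document has been read
            }
            if doc["kind"] == "KubeletConfiguration" {
                fmt.Println("kubelet cgroupDriver:", doc["cgroupDriver"]) // expect "cgroupfs"
            }
        }
    }
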
	I1017 21:14:28.596171  816637 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:14:28.600128  816637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:14:28.610188  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:28.740300  816637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:14:28.757898  816637 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583 for IP: 192.168.76.2
	I1017 21:14:28.757919  816637 certs.go:195] generating shared ca certs ...
	I1017 21:14:28.757935  816637 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.758077  816637 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:14:28.758128  816637 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:14:28.758139  816637 certs.go:257] generating profile certs ...
	I1017 21:14:28.758198  816637 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key
	I1017 21:14:28.758222  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt with IP's: []
	I1017 21:14:28.943591  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt ...
	I1017 21:14:28.943625  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt: {Name:mke9b93a6d21f77b3fa085b9e90c901fba808f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.943836  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key ...
	I1017 21:14:28.943852  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key: {Name:mk73cd3aea66d051fe1d24180c40871f463e2a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.943955  816637 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a
	I1017 21:14:28.943975  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 21:14:29.142243  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a ...
	I1017 21:14:29.142275  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a: {Name:mkd4c3ea1a823ff8d2261fd7d678484e8386fd7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.142476  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a ...
	I1017 21:14:29.142493  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a: {Name:mk6db04fba64208f95a87257e5b2691bf93087e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.142591  816637 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt
	I1017 21:14:29.142671  816637 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key
	I1017 21:14:29.142754  816637 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key
	I1017 21:14:29.142772  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt with IP's: []
	I1017 21:14:29.291825  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt ...
	I1017 21:14:29.291853  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt: {Name:mk3b61f627e7a47b13f48a1d1b3d704b1bade183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.292026  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key ...
	I1017 21:14:29.292040  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key: {Name:mkab791dff42387f7eadb4f0835412f7124ac49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.292236  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:14:29.292278  816637 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:14:29.292292  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:14:29.292319  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:14:29.292346  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:14:29.292371  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:14:29.292417  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:14:29.292966  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:14:29.311737  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:14:29.330305  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:14:29.348728  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:14:29.368087  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 21:14:29.386103  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:14:29.404552  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:14:29.423428  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:14:29.441015  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:14:29.459282  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:14:29.478500  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:14:29.498227  816637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:14:29.514036  816637 ssh_runner.go:195] Run: openssl version
	I1017 21:14:29.520571  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:14:29.529114  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.533035  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.533101  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.574333  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:14:29.583226  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:14:29.591675  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.595666  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.595774  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.637181  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:14:29.646058  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:14:29.657922  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.663020  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.663144  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.705585  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
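
The openssl/ln pairs above install each CA into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0 for minikubeCA.pem). A Go sketch of the same pattern, shelling out to openssl for the hash; linkByHash is a hypothetical helper and assumes openssl is on PATH:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash reproduces the log's pattern: compute the OpenSSL subject hash
    // of a CA file and symlink /etc/ssl/certs/<hash>.0 to it.
    func linkByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // emulate ln -fs by replacing any existing link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
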
	I1017 21:14:29.717292  816637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:14:29.721959  816637 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:14:29.722049  816637 kubeadm.go:400] StartCluster: {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:14:29.722138  816637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:14:29.722214  816637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:14:29.753839  816637 cri.go:89] found id: ""
	I1017 21:14:29.753941  816637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:14:29.761987  816637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:14:29.769731  816637 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:14:29.769820  816637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:14:29.777540  816637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:14:29.777563  816637 kubeadm.go:157] found existing configuration files:
	
	I1017 21:14:29.777615  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 21:14:29.785125  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:14:29.785197  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:14:29.792420  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 21:14:29.799964  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:14:29.800032  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:14:29.807320  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 21:14:29.815159  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:14:29.815290  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:14:29.823459  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 21:14:29.831091  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:14:29.831232  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:14:29.838795  816637 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:14:29.906051  816637 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:14:29.906347  816637 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:14:29.976818  816637 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 21:14:33.452148  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:35.948287  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:37.948826  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:39.949005  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:40.948606  813510 pod_ready.go:94] pod "coredns-66bc5c9577-zr7ck" is "Ready"
	I1017 21:14:40.948631  813510 pod_ready.go:86] duration metric: took 31.005351203s for pod "coredns-66bc5c9577-zr7ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.957181  813510 pod_ready.go:83] waiting for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.963231  813510 pod_ready.go:94] pod "etcd-no-preload-820018" is "Ready"
	I1017 21:14:40.963264  813510 pod_ready.go:86] duration metric: took 6.051686ms for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.965238  813510 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.969801  813510 pod_ready.go:94] pod "kube-apiserver-no-preload-820018" is "Ready"
	I1017 21:14:40.969866  813510 pod_ready.go:86] duration metric: took 4.564977ms for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.975763  813510 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.146370  813510 pod_ready.go:94] pod "kube-controller-manager-no-preload-820018" is "Ready"
	I1017 21:14:41.146448  813510 pod_ready.go:86] duration metric: took 170.615613ms for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.346393  813510 pod_ready.go:83] waiting for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.747046  813510 pod_ready.go:94] pod "kube-proxy-qkvkh" is "Ready"
	I1017 21:14:41.747148  813510 pod_ready.go:86] duration metric: took 400.679062ms for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.946278  813510 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:42.346456  813510 pod_ready.go:94] pod "kube-scheduler-no-preload-820018" is "Ready"
	I1017 21:14:42.346533  813510 pod_ready.go:86] duration metric: took 400.182251ms for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:42.346569  813510 pod_ready.go:40] duration metric: took 32.409346303s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:14:42.429989  813510 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:14:42.433020  813510 out.go:179] * Done! kubectl is now configured to use "no-preload-820018" cluster and "default" namespace by default
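
The interleaved pod_ready lines from process 813510 above are a plain poll: fetch the pod and wait until its Ready condition is True (coredns-66bc5c9577-zr7ck took about 31s here). A stripped-down client-go sketch of such a wait; it illustrates the idea rather than minikube's pod_ready helper, and the kubeconfig path is an assumption:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        for {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-zr7ck", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // roughly the cadence seen in the log
        }
    }
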
	I1017 21:14:47.985745  816637 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:14:47.985808  816637 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:14:47.985905  816637 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:14:47.985967  816637 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:14:47.986009  816637 kubeadm.go:318] OS: Linux
	I1017 21:14:47.986059  816637 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:14:47.986110  816637 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:14:47.986159  816637 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:14:47.986209  816637 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:14:47.986259  816637 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:14:47.986309  816637 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:14:47.986357  816637 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:14:47.986408  816637 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:14:47.986457  816637 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:14:47.986532  816637 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:14:47.986640  816637 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:14:47.986742  816637 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:14:47.986808  816637 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 21:14:47.989783  816637 out.go:252]   - Generating certificates and keys ...
	I1017 21:14:47.989886  816637 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:14:47.990002  816637 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 21:14:47.990093  816637 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:14:47.990160  816637 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:14:47.990229  816637 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:14:47.990286  816637 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:14:47.990347  816637 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:14:47.990496  816637 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-629583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:14:47.990556  816637 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:14:47.990687  816637 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-629583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:14:47.990765  816637 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:14:47.990834  816637 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:14:47.990885  816637 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:14:47.990948  816637 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 21:14:47.991005  816637 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:14:47.991068  816637 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:14:47.991177  816637 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:14:47.991249  816637 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:14:47.991317  816637 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:14:47.991408  816637 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:14:47.991487  816637 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:14:47.994554  816637 out.go:252]   - Booting up control plane ...
	I1017 21:14:47.994673  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:14:47.994765  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:14:47.994841  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:14:47.994961  816637 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:14:47.995064  816637 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:14:47.995254  816637 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:14:47.995359  816637 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:14:47.995492  816637 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:14:47.995640  816637 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:14:47.995758  816637 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:14:47.995824  816637 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501808887s
	I1017 21:14:47.995928  816637 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:14:47.996017  816637 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 21:14:47.996114  816637 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:14:47.996200  816637 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:14:47.996283  816637 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.422268613s
	I1017 21:14:47.996357  816637 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.676073636s
	I1017 21:14:47.996432  816637 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003060677s
	I1017 21:14:47.996558  816637 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:14:47.996693  816637 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:14:47.996772  816637 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:14:47.996980  816637 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-629583 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:14:47.997042  816637 kubeadm.go:318] [bootstrap-token] Using token: 2s20qu.3txe00jn3mfrfcxw
	I1017 21:14:48.002067  816637 out.go:252]   - Configuring RBAC rules ...
	I1017 21:14:48.002231  816637 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:14:48.002329  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:14:48.002504  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:14:48.002648  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:14:48.002783  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:14:48.002930  816637 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:14:48.003063  816637 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:14:48.003151  816637 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:14:48.003203  816637 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:14:48.003207  816637 kubeadm.go:318] 
	I1017 21:14:48.003272  816637 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:14:48.003300  816637 kubeadm.go:318] 
	I1017 21:14:48.003383  816637 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:14:48.003387  816637 kubeadm.go:318] 
	I1017 21:14:48.003423  816637 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:14:48.003486  816637 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:14:48.003540  816637 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:14:48.003544  816637 kubeadm.go:318] 
	I1017 21:14:48.003601  816637 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:14:48.003605  816637 kubeadm.go:318] 
	I1017 21:14:48.003655  816637 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:14:48.003659  816637 kubeadm.go:318] 
	I1017 21:14:48.003717  816637 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:14:48.003795  816637 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:14:48.003877  816637 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:14:48.003883  816637 kubeadm.go:318] 
	I1017 21:14:48.003972  816637 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:14:48.004053  816637 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:14:48.004058  816637 kubeadm.go:318] 
	I1017 21:14:48.004147  816637 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2s20qu.3txe00jn3mfrfcxw \
	I1017 21:14:48.004255  816637 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:14:48.004277  816637 kubeadm.go:318] 	--control-plane 
	I1017 21:14:48.004281  816637 kubeadm.go:318] 
	I1017 21:14:48.004370  816637 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:14:48.004374  816637 kubeadm.go:318] 
	I1017 21:14:48.004461  816637 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2s20qu.3txe00jn3mfrfcxw \
	I1017 21:14:48.004588  816637 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:14:48.004597  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:48.004604  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:48.011297  816637 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:14:48.014256  816637 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:14:48.019723  816637 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:14:48.019808  816637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:14:48.034868  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:14:48.404450  816637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:14:48.404595  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:48.404694  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-629583 minikube.k8s.io/updated_at=2025_10_17T21_14_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=embed-certs-629583 minikube.k8s.io/primary=true
	I1017 21:14:48.696683  816637 ops.go:34] apiserver oom_adj: -16
	I1017 21:14:48.696842  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:49.196982  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:49.696844  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:50.196862  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:50.697435  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:51.196844  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:51.696790  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:52.196972  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:52.318146  816637 kubeadm.go:1113] duration metric: took 3.913613519s to wait for elevateKubeSystemPrivileges
	I1017 21:14:52.318177  816637 kubeadm.go:402] duration metric: took 22.596132003s to StartCluster
	I1017 21:14:52.318195  816637 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:52.318251  816637 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:14:52.319774  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:52.319997  816637 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:14:52.320119  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:14:52.320364  816637 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:52.320409  816637 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:14:52.320473  816637 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629583"
	I1017 21:14:52.320488  816637 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629583"
	I1017 21:14:52.320534  816637 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:14:52.321025  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.321295  816637 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629583"
	I1017 21:14:52.321313  816637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629583"
	I1017 21:14:52.321577  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.324647  816637 out.go:179] * Verifying Kubernetes components...
	I1017 21:14:52.328003  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:52.358668  816637 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629583"
	I1017 21:14:52.358722  816637 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:14:52.359253  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.371725  816637 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:14:52.376525  816637 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:14:52.376548  816637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:14:52.376618  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:52.399719  816637 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:14:52.399738  816637 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:14:52.399798  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:52.412597  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:52.450387  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:52.709883  816637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:14:52.821657  816637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:14:52.821657  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:14:52.869633  816637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:14:53.174345  816637 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629583" to be "Ready" ...
	I1017 21:14:53.326860  816637 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 21:14:53.571080  816637 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
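
For reference, the repeated "kubectl get sa default" runs above are minikube polling for the default ServiceAccount before it grants kube-system elevated RBAC; the duration it reports (about 3.9s here) ends as soon as that lookup succeeds. A minimal sketch of the same wait, assuming the binary and kubeconfig paths shown in the log:

    # poll roughly every 500ms until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done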
	
	
	==> CRI-O <==
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.691971774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69bdde92-79c4-4495-a772-4e9620800249 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.693967034Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=391161d9-ed20-46b6-8924-77fc93c3b5e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.694260412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706306601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706499055Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c5cdb08c5801bcf1052031a6f4bbe832b053aad1729cd9119e5d2d877080fc0e/merged/etc/passwd: no such file or directory"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706532295Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c5cdb08c5801bcf1052031a6f4bbe832b053aad1729cd9119e5d2d877080fc0e/merged/etc/group: no such file or directory"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706817862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.741443143Z" level=info msg="Created container 488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a: kube-system/storage-provisioner/storage-provisioner" id=391161d9-ed20-46b6-8924-77fc93c3b5e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.742731973Z" level=info msg="Starting container: 488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a" id=5c4355ea-33f0-46c0-b6ec-a7370ea93019 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.747747673Z" level=info msg="Started container" PID=1628 containerID=488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a description=kube-system/storage-provisioner/storage-provisioner id=5c4355ea-33f0-46c0-b6ec-a7370ea93019 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6b921a13e667aae07fd826fcb36faa83cbcbc9b03e3af6aea0bd672097afb9f
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.201715201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.208272633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.20844555Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.208530351Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215496716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215673244Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215788806Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227349869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227527176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227617885Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235457057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235621622Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235716236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.242524699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.242685604Z" level=info msg="Updated default CNI network name to kindnet"
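
The CREATE/WRITE/RENAME events above are CRI-O's config watcher picking up the conflist that kindnet writes out; the "default CNI network" it selects is what new pod sandboxes will use. The resulting config can be inspected on the node with a sketch like the following, using the path reported in the log:

    # list the CNI configs CRI-O watches and show the kindnet conflist it selected
    ls -l /etc/cni/net.d/
    cat /etc/cni/net.d/10-kindnet.conflist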
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	488f7a323151a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           20 seconds ago      Running             storage-provisioner         2                   d6b921a13e667       storage-provisioner                          kube-system
	e93b13df9654f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   1e467b3709ae6       dashboard-metrics-scraper-6ffb444bf9-4f7qm   kubernetes-dashboard
	e558a8931dfd9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   7341d9bedab86       kubernetes-dashboard-855c9754f9-zvlnk        kubernetes-dashboard
	5ae06373fe5ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           50 seconds ago      Running             coredns                     1                   920829aaab654       coredns-66bc5c9577-zr7ck                     kube-system
	f4a7143a5ea52       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           50 seconds ago      Running             busybox                     1                   1c2d7451881e6       busybox                                      default
	22a1b64ea91a4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           50 seconds ago      Running             kindnet-cni                 1                   737012a1cf660       kindnet-s9bz8                                kube-system
	726a53759109b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           50 seconds ago      Running             kube-proxy                  1                   1de741023da2d       kube-proxy-qkvkh                             kube-system
	2d223a0a6e05d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           50 seconds ago      Exited              storage-provisioner         1                   d6b921a13e667       storage-provisioner                          kube-system
	5ca63f4d68f94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   6a155401a6dc4       kube-scheduler-no-preload-820018             kube-system
	26cf75772e537       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   bb67faac8e0b7       etcd-no-preload-820018                       kube-system
	b53892d991d16       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   6eb72488fec2b       kube-apiserver-no-preload-820018             kube-system
	2ff478fda3874       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   45a9513611a2b       kube-controller-manager-no-preload-820018    kube-system
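
A listing like the one above can usually be reproduced on the node with crictl; the socket path below is the CRI-O default and an assumption, not something taken from this log:

    # show all containers (running and exited) known to CRI-O
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a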
	
	
	==> coredns [5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 36146 "HINFO IN 4089263320317056432.8029003536391624803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012671912s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
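
The dial timeouts above show CoreDNS briefly unable to reach the in-cluster apiserver Service at 10.96.0.1:443 shortly after the restart; the client-go reflectors keep retrying until it becomes reachable. A quick reachability probe from the node, as a sketch using the Service IP reported in the errors:

    # probe the apiserver Service ClusterIP; -k skips certificate verification for a plain reachability check
    curl -sk https://10.96.0.1:443/version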
	
	
	==> describe nodes <==
	Name:               no-preload-820018
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-820018
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-820018
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:13:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820018
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-820018
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                54655725-7d36-48a4-9452-fd60671cfec5
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-zr7ck                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-no-preload-820018                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         113s
	  kube-system                 kindnet-s9bz8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-820018              250m (12%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-820018     200m (10%)    0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-qkvkh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-820018              100m (5%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4f7qm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zvlnk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 107s                 kube-proxy       
	  Normal   Starting                 48s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    113s                 kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 113s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  113s                 kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     113s                 kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   Starting                 113s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           109s                 node-controller  Node no-preload-820018 event: Registered Node no-preload-820018 in Controller
	  Normal   NodeReady                94s                  kubelet          Node no-preload-820018 status is now: NodeReady
	  Normal   Starting                 59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           46s                  node-controller  Node no-preload-820018 event: Registered Node no-preload-820018 in Controller
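
The percentages in the Allocated resources block follow from the Allocatable figures above, for example:

    850m requests / 2000m allocatable CPU   = 42.5%  (shown as 42%)
    100m limits   / 2000m allocatable CPU   =  5.0%  (shown as 5%)
    220Mi requests / 8022304Ki (~7834Mi)    ≈  2.8%  (shown as 2%)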
	
	
	==> dmesg <==
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a] <==
	{"level":"warn","ts":"2025-10-17T21:14:04.112448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.149764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.197350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.227590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.244200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.279847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.312302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.351281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.376906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.396555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.424922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.447298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.473735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.511839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.538124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.564267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.591693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.610597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.644039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.667762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.699456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.747201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.770822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.804395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.904165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52748","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:14:58 up  3:57,  0 user,  load average: 3.09, 3.64, 3.17
	Linux no-preload-820018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503] <==
	I1017 21:14:07.956365       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:14:07.956923       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:14:07.957087       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:14:07.957128       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:14:07.957168       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:14:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:14:08.200799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:14:08.200877       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:14:08.200911       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:14:08.210020       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:14:38.203402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:14:38.208970       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:14:38.209079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:14:38.209165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 21:14:39.708249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:14:39.708284       1 metrics.go:72] Registering metrics
	I1017 21:14:39.708344       1 controller.go:711] "Syncing nftables rules"
	I1017 21:14:48.201354       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:14:48.201428       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1] <==
	I1017 21:14:06.614607       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:14:06.615251       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:14:06.615676       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:14:06.615922       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 21:14:06.615984       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:14:06.637109       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:14:06.637310       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:14:06.637518       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:14:06.649346       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:14:06.650115       1 aggregator.go:171] initial CRD sync complete...
	I1017 21:14:06.713603       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 21:14:06.713635       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 21:14:06.713644       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:14:06.748041       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1017 21:14:06.891031       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:14:06.940494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:14:07.960832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:14:08.224052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:14:08.548841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:14:08.644943       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:14:09.198758       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.103.73"}
	I1017 21:14:09.236973       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.144.84"}
	I1017 21:14:11.875594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:14:12.228184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:14:12.277689       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead] <==
	I1017 21:14:11.724245       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 21:14:11.724370       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:14:11.725660       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 21:14:11.729303       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:14:11.734836       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:14:11.737726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:14:11.739196       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:14:11.743582       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 21:14:11.745967       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:14:11.749302       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:14:11.753614       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:14:11.764748       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:14:11.767026       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:14:11.767523       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 21:14:11.767892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:14:11.767942       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:14:11.767973       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:14:11.768277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:14:11.768363       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:14:11.768453       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-820018"
	I1017 21:14:11.768522       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:14:11.769086       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:14:11.791121       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:14:11.798276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:14:11.801662       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231] <==
	I1017 21:14:09.301977       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:14:09.506672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:14:09.607605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:14:09.607704       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:14:09.607813       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:14:09.629489       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:14:09.629605       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:14:09.638185       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:14:09.638705       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:14:09.638765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:14:09.645191       1 config.go:200] "Starting service config controller"
	I1017 21:14:09.645271       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:14:09.645314       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:14:09.645342       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:14:09.645380       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:14:09.645408       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:14:09.646091       1 config.go:309] "Starting node config controller"
	I1017 21:14:09.646146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:14:09.646175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:14:09.747204       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:14:09.747345       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 21:14:09.747614       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f] <==
	I1017 21:14:03.471221       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:14:09.435855       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:14:09.435889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:14:09.443350       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:14:09.443456       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:14:09.443532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:14:09.443566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:14:09.443610       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.443642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.443789       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:14:09.443857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:14:09.543804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.543953       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:14:09.544092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.197186     766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faf9c1d5-5c44-45c5-bc2f-b91224a64db1-kube-api-access-nqnzl podName:faf9c1d5-5c44-45c5-bc2f-b91224a64db1 nodeName:}" failed. No retries permitted until 2025-10-17 21:14:13.69715692 +0000 UTC m=+15.495641361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nqnzl" (UniqueName: "kubernetes.io/projected/faf9c1d5-5c44-45c5-bc2f-b91224a64db1-kube-api-access-nqnzl") pod "kubernetes-dashboard-855c9754f9-zvlnk" (UID: "faf9c1d5-5c44-45c5-bc2f-b91224a64db1") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202648     766 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202703     766 projected.go:196] Error preparing data for projected volume kube-api-access-rp98q for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202777     766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba90912c-cf3f-49ef-a121-9c591dbdf3e5-kube-api-access-rp98q podName:ba90912c-cf3f-49ef-a121-9c591dbdf3e5 nodeName:}" failed. No retries permitted until 2025-10-17 21:14:13.702755578 +0000 UTC m=+15.501240019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rp98q" (UniqueName: "kubernetes.io/projected/ba90912c-cf3f-49ef-a121-9c591dbdf3e5-kube-api-access-rp98q") pod "dashboard-metrics-scraper-6ffb444bf9-4f7qm" (UID: "ba90912c-cf3f-49ef-a121-9c591dbdf3e5") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: W1017 21:14:13.868498     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443 WatchSource:0}: Error finding container 1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443: Status 404 returned error can't find the container with id 1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: W1017 21:14:13.889916     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411 WatchSource:0}: Error finding container 7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411: Status 404 returned error can't find the container with id 7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411
	Oct 17 21:14:20 no-preload-820018 kubelet[766]: I1017 21:14:20.627407     766 scope.go:117] "RemoveContainer" containerID="19185ea103412fe6cd53a14519d1f099338035f506e131f823b5cb06eeb519df"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: I1017 21:14:21.630947     766 scope.go:117] "RemoveContainer" containerID="19185ea103412fe6cd53a14519d1f099338035f506e131f823b5cb06eeb519df"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: I1017 21:14:21.631688     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: E1017 21:14:21.631836     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:22 no-preload-820018 kubelet[766]: I1017 21:14:22.634395     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:22 no-preload-820018 kubelet[766]: E1017 21:14:22.634553     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:23 no-preload-820018 kubelet[766]: I1017 21:14:23.836735     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:23 no-preload-820018 kubelet[766]: E1017 21:14:23.836901     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:34 no-preload-820018 kubelet[766]: I1017 21:14:34.396414     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:34 no-preload-820018 kubelet[766]: I1017 21:14:34.677697     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: I1017 21:14:35.681887     766 scope.go:117] "RemoveContainer" containerID="e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: E1017 21:14:35.682119     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: I1017 21:14:35.705420     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zvlnk" podStartSLOduration=12.755548744 podStartE2EDuration="24.705398086s" podCreationTimestamp="2025-10-17 21:14:11 +0000 UTC" firstStartedPulling="2025-10-17 21:14:13.894258499 +0000 UTC m=+15.692742940" lastFinishedPulling="2025-10-17 21:14:25.844107842 +0000 UTC m=+27.642592282" observedRunningTime="2025-10-17 21:14:26.670288871 +0000 UTC m=+28.468773320" watchObservedRunningTime="2025-10-17 21:14:35.705398086 +0000 UTC m=+37.503882527"
	Oct 17 21:14:37 no-preload-820018 kubelet[766]: I1017 21:14:37.689539     766 scope.go:117] "RemoveContainer" containerID="2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	Oct 17 21:14:43 no-preload-820018 kubelet[766]: I1017 21:14:43.837057     766 scope.go:117] "RemoveContainer" containerID="e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	Oct 17 21:14:43 no-preload-820018 kubelet[766]: E1017 21:14:43.837246     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:54 no-preload-820018 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:14:54 no-preload-820018 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:14:54 no-preload-820018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c] <==
	2025/10/17 21:14:25 Starting overwatch
	2025/10/17 21:14:25 Using namespace: kubernetes-dashboard
	2025/10/17 21:14:25 Using in-cluster config to connect to apiserver
	2025/10/17 21:14:25 Using secret token for csrf signing
	2025/10/17 21:14:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:14:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:14:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:14:25 Generating JWE encryption key
	2025/10/17 21:14:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:14:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:14:26 Initializing JWE encryption key from synchronized object
	2025/10/17 21:14:26 Creating in-cluster Sidecar client
	2025/10/17 21:14:26 Serving insecurely on HTTP port: 9090
	2025/10/17 21:14:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:14:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e] <==
	I1017 21:14:07.566053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:14:37.569613       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a] <==
	I1017 21:14:37.779275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:14:37.810708       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:14:37.811118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:14:37.814059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:41.268721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:45.529028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:49.126865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:52.180305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.203446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.211378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:14:55.211555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:14:55.215228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c!
	I1017 21:14:55.216906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29c5c6d-086f-48e5-9bd2-362e2a3b2aa8", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c became leader
	W1017 21:14:55.225110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.234667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:14:55.319216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c!
	W1017 21:14:57.237751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:57.244996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
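Aside (not part of the captured test output): the kubelet excerpt above shows dashboard-metrics-scraper-6ffb444bf9-4f7qm cycling through CrashLoopBackOff with a growing back-off (10s, then 20s) before kubelet is stopped for the pause test. A minimal sketch of how one might drill into that pod by hand, reusing the profile, namespace, and pod name from the log above:

    # Describe the crash-looping pod reported by the kubelet (names taken from the log above).
    kubectl --context no-preload-820018 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-4f7qm
    # Dump the logs of the previous (crashed) container instance.
    kubectl --context no-preload-820018 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-4f7qm --previous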
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820018 -n no-preload-820018
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820018 -n no-preload-820018: exit status 2 (378.753739ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
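Aside (not part of the captured test output): the queried field ({{.APIServer}}) prints "Running" here, yet the command exits 2; a non-zero status exit generally signals that at least one component is not in its expected state (kubelet was stopped by systemd in the log above). A minimal sketch, assuming the same minikube binary, of getting every status field at once instead of one Go-template field:

    # Print all status fields as JSON rather than a single {{.APIServer}}-style template field.
    out/minikube-linux-arm64 status -p no-preload-820018 --output json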
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-820018 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-820018
helpers_test.go:243: (dbg) docker inspect no-preload-820018:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	        "Created": "2025-10-17T21:12:18.108117414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 813640,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:13:51.645292904Z",
	            "FinishedAt": "2025-10-17T21:13:50.778357092Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hostname",
	        "HostsPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/hosts",
	        "LogPath": "/var/lib/docker/containers/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589-json.log",
	        "Name": "/no-preload-820018",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-820018:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-820018",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589",
	                "LowerDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/eee07d1e84d3479afd09b1d7f44b143080820159986b754f1e3ea493eec560a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-820018",
	                "Source": "/var/lib/docker/volumes/no-preload-820018/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-820018",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-820018",
	                "name.minikube.sigs.k8s.io": "no-preload-820018",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "354ab397ea11fe050eceb0ac4f0957890990f5a57b2df22cf1836e73ff286dcd",
	            "SandboxKey": "/var/run/docker/netns/354ab397ea11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33839"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33840"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33843"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33841"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33842"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-820018": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:20:c3:50:f4:a2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5060c6ac5e7d3e19dab985b5302ecd4b006296949593ffc066761654983bbcd9",
	                    "EndpointID": "2d6e65155d23f2dec2888df2353e03cbbff95a63a20999abce77aa11f16dedbf",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-820018",
	                        "9842fccb0456"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
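Aside (not part of the captured test output): the docker inspect dump above is the full container record; when only one value is wanted, the same Go-template syntax the harness itself uses later in this log (e.g. for the "22/tcp" port) also works on the command line. A minimal sketch, pulling the host port mapped to the API server's 8443/tcp:

    # Print only the host port bound to 8443/tcp (33842 in the inspect output above).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-820018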
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018
E1017 21:14:59.156332  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018: exit status 2 (368.370366ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-820018 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-820018 logs -n 25: (1.715208831s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo containerd config dump                                                                                                                                                                                                  │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721          │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710 │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583     │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018      │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:14:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:14:12.482546  816637 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:14:12.482667  816637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:12.482678  816637 out.go:374] Setting ErrFile to fd 2...
	I1017 21:14:12.482682  816637 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:14:12.482939  816637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:14:12.483389  816637 out.go:368] Setting JSON to false
	I1017 21:14:12.484349  816637 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14198,"bootTime":1760721454,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:14:12.484420  816637 start.go:141] virtualization:  
	I1017 21:14:12.487979  816637 out.go:179] * [embed-certs-629583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:14:12.491065  816637 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:14:12.491156  816637 notify.go:220] Checking for updates...
	I1017 21:14:12.498295  816637 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:14:12.501146  816637 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:14:12.503992  816637 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:14:12.506840  816637 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:14:12.509667  816637 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:14:12.513125  816637 config.go:182] Loaded profile config "no-preload-820018": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:12.513233  816637 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:14:12.548048  816637 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:14:12.548212  816637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:12.608138  816637 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 21:14:12.598948406 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:12.608265  816637 docker.go:318] overlay module found
	I1017 21:14:12.611430  816637 out.go:179] * Using the docker driver based on user configuration
	I1017 21:14:12.614277  816637 start.go:305] selected driver: docker
	I1017 21:14:12.614298  816637 start.go:925] validating driver "docker" against <nil>
	I1017 21:14:12.614313  816637 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:14:12.615057  816637 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:14:12.695237  816637 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-10-17 21:14:12.683189536 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:14:12.695402  816637 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 21:14:12.695633  816637 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:14:12.698630  816637 out.go:179] * Using Docker driver with root privileges
	I1017 21:14:12.701504  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:12.701573  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:12.701585  816637 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:14:12.701670  816637 start.go:349] cluster config:
	{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:14:12.704594  816637 out.go:179] * Starting "embed-certs-629583" primary control-plane node in "embed-certs-629583" cluster
	I1017 21:14:12.707457  816637 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:14:12.710332  816637 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:14:12.713248  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:12.713301  816637 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:14:12.713314  816637 cache.go:58] Caching tarball of preloaded images
	I1017 21:14:12.713340  816637 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:14:12.713408  816637 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:14:12.713431  816637 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:14:12.713531  816637 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:14:12.713552  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json: {Name:mk6a1238dd71845769fa9266f3bd52e2343a2974 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:12.733029  816637 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:14:12.733054  816637 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:14:12.733073  816637 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:14:12.733095  816637 start.go:360] acquireMachinesLock for embed-certs-629583: {Name:mk04401a4732e984651d3d859464878000ecb8c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:14:12.733222  816637 start.go:364] duration metric: took 106.102µs to acquireMachinesLock for "embed-certs-629583"
	I1017 21:14:12.733253  816637 start.go:93] Provisioning new machine with config: &{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:14:12.733333  816637 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:14:12.001394  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:14.449862  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:12.736602  816637 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:14:12.736847  816637 start.go:159] libmachine.API.Create for "embed-certs-629583" (driver="docker")
	I1017 21:14:12.736900  816637 client.go:168] LocalClient.Create starting
	I1017 21:14:12.737000  816637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:14:12.737043  816637 main.go:141] libmachine: Decoding PEM data...
	I1017 21:14:12.737064  816637 main.go:141] libmachine: Parsing certificate...
	I1017 21:14:12.737118  816637 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:14:12.737143  816637 main.go:141] libmachine: Decoding PEM data...
	I1017 21:14:12.737156  816637 main.go:141] libmachine: Parsing certificate...
	I1017 21:14:12.737517  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:14:12.753957  816637 cli_runner.go:211] docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:14:12.754036  816637 network_create.go:284] running [docker network inspect embed-certs-629583] to gather additional debugging logs...
	I1017 21:14:12.754057  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583
	W1017 21:14:12.770736  816637 cli_runner.go:211] docker network inspect embed-certs-629583 returned with exit code 1
	I1017 21:14:12.770775  816637 network_create.go:287] error running [docker network inspect embed-certs-629583]: docker network inspect embed-certs-629583: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-629583 not found
	I1017 21:14:12.770790  816637 network_create.go:289] output of [docker network inspect embed-certs-629583]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-629583 not found
	
	** /stderr **
	I1017 21:14:12.770887  816637 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:14:12.788743  816637 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:14:12.789136  816637 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:14:12.789440  816637 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:14:12.789905  816637 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a33520}
	I1017 21:14:12.789931  816637 network_create.go:124] attempt to create docker network embed-certs-629583 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 21:14:12.789991  816637 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-629583 embed-certs-629583
	I1017 21:14:12.853748  816637 network_create.go:108] docker network embed-certs-629583 192.168.76.0/24 created
	I1017 21:14:12.853784  816637 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-629583" container
	I1017 21:14:12.853859  816637 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:14:12.869586  816637 cli_runner.go:164] Run: docker volume create embed-certs-629583 --label name.minikube.sigs.k8s.io=embed-certs-629583 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:14:12.891182  816637 oci.go:103] Successfully created a docker volume embed-certs-629583
	I1017 21:14:12.891337  816637 cli_runner.go:164] Run: docker run --rm --name embed-certs-629583-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629583 --entrypoint /usr/bin/test -v embed-certs-629583:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:14:13.467300  816637 oci.go:107] Successfully prepared a docker volume embed-certs-629583
	I1017 21:14:13.467363  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:13.467383  816637 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:14:13.467480  816637 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:14:16.452202  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:18.977967  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:19.681520  816637 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-629583:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (6.214000883s)
	I1017 21:14:19.681553  816637 kic.go:203] duration metric: took 6.214165988s to extract preloaded images to volume ...
	W1017 21:14:19.681698  816637 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:14:19.681812  816637 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:14:19.760881  816637 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-629583 --name embed-certs-629583 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-629583 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-629583 --network embed-certs-629583 --ip 192.168.76.2 --volume embed-certs-629583:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:14:20.115513  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Running}}
	I1017 21:14:20.138221  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:20.167278  816637 cli_runner.go:164] Run: docker exec embed-certs-629583 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:14:20.233055  816637 oci.go:144] the created container "embed-certs-629583" has a running status.
	I1017 21:14:20.233094  816637 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa...
	I1017 21:14:21.615329  816637 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:14:21.641646  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:21.669049  816637 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:14:21.669068  816637 kic_runner.go:114] Args: [docker exec --privileged embed-certs-629583 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:14:21.750758  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:21.779546  816637 machine.go:93] provisionDockerMachine start ...
	I1017 21:14:21.779653  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:21.810155  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:21.810498  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:21.810515  816637 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:14:21.811089  816637 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53596->127.0.0.1:33844: read: connection reset by peer
	W1017 21:14:21.449982  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:23.948947  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:25.949534  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:24.978889  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
	
	I1017 21:14:24.978912  816637 ubuntu.go:182] provisioning hostname "embed-certs-629583"
	I1017 21:14:24.978974  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.012594  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.012902  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.012913  816637 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629583 && echo "embed-certs-629583" | sudo tee /etc/hostname
	I1017 21:14:25.189999  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
	
	I1017 21:14:25.190090  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.213821  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.214126  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.214142  816637 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:14:25.371261  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:14:25.371295  816637 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:14:25.371321  816637 ubuntu.go:190] setting up certificates
	I1017 21:14:25.371331  816637 provision.go:84] configureAuth start
	I1017 21:14:25.371405  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:25.398563  816637 provision.go:143] copyHostCerts
	I1017 21:14:25.398628  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:14:25.398637  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:14:25.398722  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:14:25.398808  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:14:25.398814  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:14:25.398841  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:14:25.398899  816637 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:14:25.398903  816637 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:14:25.398929  816637 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:14:25.398975  816637 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629583 san=[127.0.0.1 192.168.76.2 embed-certs-629583 localhost minikube]
	I1017 21:14:25.666653  816637 provision.go:177] copyRemoteCerts
	I1017 21:14:25.666732  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:14:25.666778  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.688302  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:25.798879  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:14:25.818499  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:14:25.844255  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:14:25.875261  816637 provision.go:87] duration metric: took 503.903304ms to configureAuth
	I1017 21:14:25.875285  816637 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:14:25.875466  816637 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:25.875573  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:25.900869  816637 main.go:141] libmachine: Using SSH client type: native
	I1017 21:14:25.901181  816637 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33844 <nil> <nil>}
	I1017 21:14:25.901200  816637 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:14:26.246433  816637 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:14:26.246498  816637 machine.go:96] duration metric: took 4.466925554s to provisionDockerMachine
	I1017 21:14:26.246526  816637 client.go:171] duration metric: took 13.509613225s to LocalClient.Create
	I1017 21:14:26.246556  816637 start.go:167] duration metric: took 13.509714519s to libmachine.API.Create "embed-certs-629583"
	I1017 21:14:26.246599  816637 start.go:293] postStartSetup for "embed-certs-629583" (driver="docker")
	I1017 21:14:26.246627  816637 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:14:26.246730  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:14:26.246774  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.265883  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.371310  816637 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:14:26.374825  816637 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:14:26.374849  816637 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:14:26.374864  816637 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:14:26.374917  816637 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:14:26.374996  816637 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:14:26.375132  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:14:26.382389  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:14:26.401627  816637 start.go:296] duration metric: took 154.995689ms for postStartSetup
	I1017 21:14:26.401988  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:26.418450  816637 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:14:26.418744  816637 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:14:26.418783  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.438238  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.541895  816637 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:14:26.546617  816637 start.go:128] duration metric: took 13.813270763s to createHost
	I1017 21:14:26.546642  816637 start.go:83] releasing machines lock for "embed-certs-629583", held for 13.813407823s
	I1017 21:14:26.546750  816637 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:14:26.564665  816637 ssh_runner.go:195] Run: cat /version.json
	I1017 21:14:26.564704  816637 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:14:26.564715  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.564756  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:26.594108  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.602591  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:26.710743  816637 ssh_runner.go:195] Run: systemctl --version
	I1017 21:14:26.799353  816637 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:14:26.838176  816637 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:14:26.843491  816637 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:14:26.843595  816637 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:14:26.872078  816637 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:14:26.872098  816637 start.go:495] detecting cgroup driver to use...
	I1017 21:14:26.872130  816637 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:14:26.872179  816637 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:14:26.890480  816637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:14:26.903526  816637 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:14:26.903586  816637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:14:26.921240  816637 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:14:26.939725  816637 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:14:27.078584  816637 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:14:27.210534  816637 docker.go:234] disabling docker service ...
	I1017 21:14:27.210600  816637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:14:27.239000  816637 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:14:27.254897  816637 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:14:27.394283  816637 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:14:27.514424  816637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:14:27.529717  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:14:27.545071  816637 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:14:27.545183  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.554910  816637 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:14:27.555027  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.564058  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.572885  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.581865  816637 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:14:27.590179  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.598832  816637 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.612313  816637 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:14:27.621302  816637 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:14:27.629649  816637 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:14:27.637509  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:27.759613  816637 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:14:28.272363  816637 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:14:28.272508  816637 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:14:28.276406  816637 start.go:563] Will wait 60s for crictl version
	I1017 21:14:28.276512  816637 ssh_runner.go:195] Run: which crictl
	I1017 21:14:28.280107  816637 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:14:28.309200  816637 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:14:28.309357  816637 ssh_runner.go:195] Run: crio --version
	I1017 21:14:28.338914  816637 ssh_runner.go:195] Run: crio --version
	I1017 21:14:28.374319  816637 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 21:14:28.452117  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:30.950821  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:28.377417  816637 cli_runner.go:164] Run: docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:14:28.394485  816637 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:14:28.399209  816637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:14:28.409850  816637 kubeadm.go:883] updating cluster {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:14:28.409962  816637 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:14:28.410018  816637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:14:28.444768  816637 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:14:28.444788  816637 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:14:28.444846  816637 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:14:28.476508  816637 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:14:28.476594  816637 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:14:28.476619  816637 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:14:28.476735  816637 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:14:28.476853  816637 ssh_runner.go:195] Run: crio config
	I1017 21:14:28.536506  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:28.536580  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:28.536618  816637 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:14:28.536672  816637 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629583 NodeName:embed-certs-629583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:14:28.536841  816637 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:14:28.536947  816637 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:14:28.546667  816637 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:14:28.546744  816637 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:14:28.554687  816637 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 21:14:28.568713  816637 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:14:28.582403  816637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1017 21:14:28.596171  816637 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:14:28.600128  816637 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:14:28.610188  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:28.740300  816637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:14:28.757898  816637 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583 for IP: 192.168.76.2
	I1017 21:14:28.757919  816637 certs.go:195] generating shared ca certs ...
	I1017 21:14:28.757935  816637 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.758077  816637 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:14:28.758128  816637 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:14:28.758139  816637 certs.go:257] generating profile certs ...
	I1017 21:14:28.758198  816637 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key
	I1017 21:14:28.758222  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt with IP's: []
	I1017 21:14:28.943591  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt ...
	I1017 21:14:28.943625  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.crt: {Name:mke9b93a6d21f77b3fa085b9e90c901fba808f24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.943836  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key ...
	I1017 21:14:28.943852  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key: {Name:mk73cd3aea66d051fe1d24180c40871f463e2a15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:28.943955  816637 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a
	I1017 21:14:28.943975  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 21:14:29.142243  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a ...
	I1017 21:14:29.142275  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a: {Name:mkd4c3ea1a823ff8d2261fd7d678484e8386fd7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.142476  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a ...
	I1017 21:14:29.142493  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a: {Name:mk6db04fba64208f95a87257e5b2691bf93087e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.142591  816637 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt.d9e5dc6a -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt
	I1017 21:14:29.142671  816637 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key
	I1017 21:14:29.142754  816637 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key
	I1017 21:14:29.142772  816637 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt with IP's: []
	I1017 21:14:29.291825  816637 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt ...
	I1017 21:14:29.291853  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt: {Name:mk3b61f627e7a47b13f48a1d1b3d704b1bade183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.292026  816637 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key ...
	I1017 21:14:29.292040  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key: {Name:mkab791dff42387f7eadb4f0835412f7124ac49c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:29.292236  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:14:29.292278  816637 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:14:29.292292  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:14:29.292319  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:14:29.292346  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:14:29.292371  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:14:29.292417  816637 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:14:29.292966  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:14:29.311737  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:14:29.330305  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:14:29.348728  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:14:29.368087  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 21:14:29.386103  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:14:29.404552  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:14:29.423428  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:14:29.441015  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:14:29.459282  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:14:29.478500  816637 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:14:29.498227  816637 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:14:29.514036  816637 ssh_runner.go:195] Run: openssl version
	I1017 21:14:29.520571  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:14:29.529114  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.533035  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.533101  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:14:29.574333  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:14:29.583226  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:14:29.591675  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.595666  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.595774  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:14:29.637181  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:14:29.646058  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:14:29.657922  816637 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.663020  816637 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.663144  816637 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:14:29.705585  816637 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:14:29.717292  816637 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:14:29.721959  816637 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:14:29.722049  816637 kubeadm.go:400] StartCluster: {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:14:29.722138  816637 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:14:29.722214  816637 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:14:29.753839  816637 cri.go:89] found id: ""
	I1017 21:14:29.753941  816637 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:14:29.761987  816637 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:14:29.769731  816637 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:14:29.769820  816637 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:14:29.777540  816637 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:14:29.777563  816637 kubeadm.go:157] found existing configuration files:
	
	I1017 21:14:29.777615  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 21:14:29.785125  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:14:29.785197  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:14:29.792420  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 21:14:29.799964  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:14:29.800032  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:14:29.807320  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 21:14:29.815159  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:14:29.815290  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:14:29.823459  816637 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 21:14:29.831091  816637 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:14:29.831232  816637 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:14:29.838795  816637 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:14:29.906051  816637 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:14:29.906347  816637 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:14:29.976818  816637 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1017 21:14:33.452148  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:35.948287  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:37.948826  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	W1017 21:14:39.949005  813510 pod_ready.go:104] pod "coredns-66bc5c9577-zr7ck" is not "Ready", error: <nil>
	I1017 21:14:40.948606  813510 pod_ready.go:94] pod "coredns-66bc5c9577-zr7ck" is "Ready"
	I1017 21:14:40.948631  813510 pod_ready.go:86] duration metric: took 31.005351203s for pod "coredns-66bc5c9577-zr7ck" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.957181  813510 pod_ready.go:83] waiting for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.963231  813510 pod_ready.go:94] pod "etcd-no-preload-820018" is "Ready"
	I1017 21:14:40.963264  813510 pod_ready.go:86] duration metric: took 6.051686ms for pod "etcd-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.965238  813510 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.969801  813510 pod_ready.go:94] pod "kube-apiserver-no-preload-820018" is "Ready"
	I1017 21:14:40.969866  813510 pod_ready.go:86] duration metric: took 4.564977ms for pod "kube-apiserver-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:40.975763  813510 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.146370  813510 pod_ready.go:94] pod "kube-controller-manager-no-preload-820018" is "Ready"
	I1017 21:14:41.146448  813510 pod_ready.go:86] duration metric: took 170.615613ms for pod "kube-controller-manager-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.346393  813510 pod_ready.go:83] waiting for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.747046  813510 pod_ready.go:94] pod "kube-proxy-qkvkh" is "Ready"
	I1017 21:14:41.747148  813510 pod_ready.go:86] duration metric: took 400.679062ms for pod "kube-proxy-qkvkh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:41.946278  813510 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:42.346456  813510 pod_ready.go:94] pod "kube-scheduler-no-preload-820018" is "Ready"
	I1017 21:14:42.346533  813510 pod_ready.go:86] duration metric: took 400.182251ms for pod "kube-scheduler-no-preload-820018" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:14:42.346569  813510 pod_ready.go:40] duration metric: took 32.409346303s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:14:42.429989  813510 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:14:42.433020  813510 out.go:179] * Done! kubectl is now configured to use "no-preload-820018" cluster and "default" namespace by default
	I1017 21:14:47.985745  816637 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:14:47.985808  816637 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:14:47.985905  816637 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:14:47.985967  816637 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:14:47.986009  816637 kubeadm.go:318] OS: Linux
	I1017 21:14:47.986059  816637 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:14:47.986110  816637 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:14:47.986159  816637 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:14:47.986209  816637 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:14:47.986259  816637 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:14:47.986309  816637 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:14:47.986357  816637 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:14:47.986408  816637 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:14:47.986457  816637 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:14:47.986532  816637 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:14:47.986640  816637 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:14:47.986742  816637 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:14:47.986808  816637 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 21:14:47.989783  816637 out.go:252]   - Generating certificates and keys ...
	I1017 21:14:47.989886  816637 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:14:47.990002  816637 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 21:14:47.990093  816637 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:14:47.990160  816637 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:14:47.990229  816637 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:14:47.990286  816637 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:14:47.990347  816637 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:14:47.990496  816637 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [embed-certs-629583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:14:47.990556  816637 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:14:47.990687  816637 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-629583 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:14:47.990765  816637 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:14:47.990834  816637 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:14:47.990885  816637 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:14:47.990948  816637 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 21:14:47.991005  816637 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:14:47.991068  816637 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:14:47.991177  816637 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:14:47.991249  816637 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:14:47.991317  816637 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:14:47.991408  816637 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:14:47.991487  816637 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:14:47.994554  816637 out.go:252]   - Booting up control plane ...
	I1017 21:14:47.994673  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:14:47.994765  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:14:47.994841  816637 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:14:47.994961  816637 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:14:47.995064  816637 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:14:47.995254  816637 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:14:47.995359  816637 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:14:47.995492  816637 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:14:47.995640  816637 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:14:47.995758  816637 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:14:47.995824  816637 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501808887s
	I1017 21:14:47.995928  816637 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:14:47.996017  816637 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 21:14:47.996114  816637 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:14:47.996200  816637 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:14:47.996283  816637 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.422268613s
	I1017 21:14:47.996357  816637 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.676073636s
	I1017 21:14:47.996432  816637 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.003060677s
	I1017 21:14:47.996558  816637 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:14:47.996693  816637 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:14:47.996772  816637 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:14:47.996980  816637 kubeadm.go:318] [mark-control-plane] Marking the node embed-certs-629583 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:14:47.997042  816637 kubeadm.go:318] [bootstrap-token] Using token: 2s20qu.3txe00jn3mfrfcxw
	I1017 21:14:48.002067  816637 out.go:252]   - Configuring RBAC rules ...
	I1017 21:14:48.002231  816637 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:14:48.002329  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:14:48.002504  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:14:48.002648  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:14:48.002783  816637 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:14:48.002930  816637 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:14:48.003063  816637 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:14:48.003151  816637 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:14:48.003203  816637 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:14:48.003207  816637 kubeadm.go:318] 
	I1017 21:14:48.003272  816637 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:14:48.003300  816637 kubeadm.go:318] 
	I1017 21:14:48.003383  816637 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:14:48.003387  816637 kubeadm.go:318] 
	I1017 21:14:48.003423  816637 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:14:48.003486  816637 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:14:48.003540  816637 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:14:48.003544  816637 kubeadm.go:318] 
	I1017 21:14:48.003601  816637 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:14:48.003605  816637 kubeadm.go:318] 
	I1017 21:14:48.003655  816637 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:14:48.003659  816637 kubeadm.go:318] 
	I1017 21:14:48.003717  816637 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:14:48.003795  816637 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:14:48.003877  816637 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:14:48.003883  816637 kubeadm.go:318] 
	I1017 21:14:48.003972  816637 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:14:48.004053  816637 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:14:48.004058  816637 kubeadm.go:318] 
	I1017 21:14:48.004147  816637 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 2s20qu.3txe00jn3mfrfcxw \
	I1017 21:14:48.004255  816637 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:14:48.004277  816637 kubeadm.go:318] 	--control-plane 
	I1017 21:14:48.004281  816637 kubeadm.go:318] 
	I1017 21:14:48.004370  816637 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:14:48.004374  816637 kubeadm.go:318] 
	I1017 21:14:48.004461  816637 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 2s20qu.3txe00jn3mfrfcxw \
	I1017 21:14:48.004588  816637 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:14:48.004597  816637 cni.go:84] Creating CNI manager for ""
	I1017 21:14:48.004604  816637 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:14:48.011297  816637 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:14:48.014256  816637 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:14:48.019723  816637 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:14:48.019808  816637 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:14:48.034868  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:14:48.404450  816637 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:14:48.404595  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:48.404694  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-629583 minikube.k8s.io/updated_at=2025_10_17T21_14_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=embed-certs-629583 minikube.k8s.io/primary=true
	I1017 21:14:48.696683  816637 ops.go:34] apiserver oom_adj: -16
	I1017 21:14:48.696842  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:49.196982  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:49.696844  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:50.196862  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:50.697435  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:51.196844  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:51.696790  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:52.196972  816637 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:14:52.318146  816637 kubeadm.go:1113] duration metric: took 3.913613519s to wait for elevateKubeSystemPrivileges
	I1017 21:14:52.318177  816637 kubeadm.go:402] duration metric: took 22.596132003s to StartCluster
	I1017 21:14:52.318195  816637 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:52.318251  816637 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:14:52.319774  816637 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:14:52.319997  816637 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:14:52.320119  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:14:52.320364  816637 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:14:52.320409  816637 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:14:52.320473  816637 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629583"
	I1017 21:14:52.320488  816637 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629583"
	I1017 21:14:52.320534  816637 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:14:52.321025  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.321295  816637 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629583"
	I1017 21:14:52.321313  816637 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629583"
	I1017 21:14:52.321577  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.324647  816637 out.go:179] * Verifying Kubernetes components...
	I1017 21:14:52.328003  816637 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:14:52.358668  816637 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629583"
	I1017 21:14:52.358722  816637 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:14:52.359253  816637 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:14:52.371725  816637 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:14:52.376525  816637 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:14:52.376548  816637 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:14:52.376618  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:52.399719  816637 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:14:52.399738  816637 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:14:52.399798  816637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:14:52.412597  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:52.450387  816637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33844 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:14:52.709883  816637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:14:52.821657  816637 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:14:52.821657  816637 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:14:52.869633  816637 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:14:53.174345  816637 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629583" to be "Ready" ...
	I1017 21:14:53.326860  816637 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 21:14:53.571080  816637 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 21:14:53.574039  816637 addons.go:514] duration metric: took 1.253608952s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 21:14:53.830863  816637 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-629583" context rescaled to 1 replicas
	W1017 21:14:55.179401  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.691971774Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=69bdde92-79c4-4495-a772-4e9620800249 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.693967034Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=391161d9-ed20-46b6-8924-77fc93c3b5e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.694260412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706306601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706499055Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c5cdb08c5801bcf1052031a6f4bbe832b053aad1729cd9119e5d2d877080fc0e/merged/etc/passwd: no such file or directory"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706532295Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c5cdb08c5801bcf1052031a6f4bbe832b053aad1729cd9119e5d2d877080fc0e/merged/etc/group: no such file or directory"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.706817862Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.741443143Z" level=info msg="Created container 488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a: kube-system/storage-provisioner/storage-provisioner" id=391161d9-ed20-46b6-8924-77fc93c3b5e0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.742731973Z" level=info msg="Starting container: 488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a" id=5c4355ea-33f0-46c0-b6ec-a7370ea93019 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:14:37 no-preload-820018 crio[649]: time="2025-10-17T21:14:37.747747673Z" level=info msg="Started container" PID=1628 containerID=488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a description=kube-system/storage-provisioner/storage-provisioner id=5c4355ea-33f0-46c0-b6ec-a7370ea93019 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6b921a13e667aae07fd826fcb36faa83cbcbc9b03e3af6aea0bd672097afb9f
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.201715201Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.208272633Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.20844555Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.208530351Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215496716Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215673244Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.215788806Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227349869Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227527176Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.227617885Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235457057Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235621622Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.235716236Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.242524699Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:14:48 no-preload-820018 crio[649]: time="2025-10-17T21:14:48.242685604Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	488f7a323151a       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           22 seconds ago       Running             storage-provisioner         2                   d6b921a13e667       storage-provisioner                          kube-system
	e93b13df9654f       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago       Exited              dashboard-metrics-scraper   2                   1e467b3709ae6       dashboard-metrics-scraper-6ffb444bf9-4f7qm   kubernetes-dashboard
	e558a8931dfd9       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   7341d9bedab86       kubernetes-dashboard-855c9754f9-zvlnk        kubernetes-dashboard
	5ae06373fe5ed       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago       Running             coredns                     1                   920829aaab654       coredns-66bc5c9577-zr7ck                     kube-system
	f4a7143a5ea52       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago       Running             busybox                     1                   1c2d7451881e6       busybox                                      default
	22a1b64ea91a4       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago       Running             kindnet-cni                 1                   737012a1cf660       kindnet-s9bz8                                kube-system
	726a53759109b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago       Running             kube-proxy                  1                   1de741023da2d       kube-proxy-qkvkh                             kube-system
	2d223a0a6e05d       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           52 seconds ago       Exited              storage-provisioner         1                   d6b921a13e667       storage-provisioner                          kube-system
	5ca63f4d68f94       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6a155401a6dc4       kube-scheduler-no-preload-820018             kube-system
	26cf75772e537       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   bb67faac8e0b7       etcd-no-preload-820018                       kube-system
	b53892d991d16       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6eb72488fec2b       kube-apiserver-no-preload-820018             kube-system
	2ff478fda3874       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   45a9513611a2b       kube-controller-manager-no-preload-820018    kube-system
	
	
	==> coredns [5ae06373fe5ed916a3a524910626429370715733b8c4e4c677ce83108d174710] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45959 - 36146 "HINFO IN 4089263320317056432.8029003536391624803. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012671912s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-820018
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-820018
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=no-preload-820018
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_13_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:13:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-820018
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:14:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:12:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:14:37 +0000   Fri, 17 Oct 2025 21:13:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-820018
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                54655725-7d36-48a4-9452-fd60671cfec5
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-zr7ck                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-no-preload-820018                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         116s
	  kube-system                 kindnet-s9bz8                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-820018              250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-820018     200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-qkvkh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-820018              100m (5%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4f7qm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-zvlnk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 109s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  2m8s (x8 over 2m8s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s (x8 over 2m8s)  kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s (x8 over 2m8s)  kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    116s                 kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 116s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s                 kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     116s                 kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   Starting                 116s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           112s                 node-controller  Node no-preload-820018 event: Registered Node no-preload-820018 in Controller
	  Normal   NodeReady                97s                  kubelet          Node no-preload-820018 status is now: NodeReady
	  Normal   Starting                 62s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)    kubelet          Node no-preload-820018 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)    kubelet          Node no-preload-820018 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)    kubelet          Node no-preload-820018 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                  node-controller  Node no-preload-820018 event: Registered Node no-preload-820018 in Controller
	
	
	==> dmesg <==
	[Oct17 20:50] overlayfs: idmapped layers are currently not supported
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [26cf75772e537da3d52b5a92de69edb24b86ee1f7bf5897c5a7dffaf91d9352a] <==
	{"level":"warn","ts":"2025-10-17T21:14:04.112448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.149764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.197350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.227590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.244200Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.279847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.312302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.351281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.376906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.396555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.424922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.447298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.473735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.511839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.538124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.564267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.591693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.610597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.644039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.667762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.699456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.747201Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.770822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.804395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:04.904165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52748","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:15:00 up  3:57,  0 user,  load average: 3.09, 3.64, 3.17
	Linux no-preload-820018 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [22a1b64ea91a45aa3ae1345c6f33154f016f57ca997addae9c1a806045dcf503] <==
	I1017 21:14:07.956365       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:14:07.956923       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:14:07.957087       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:14:07.957128       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:14:07.957168       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:14:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:14:08.200799       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:14:08.200877       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:14:08.200911       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:14:08.210020       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:14:38.203402       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:14:38.208970       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:14:38.209079       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:14:38.209165       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1017 21:14:39.708249       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:14:39.708284       1 metrics.go:72] Registering metrics
	I1017 21:14:39.708344       1 controller.go:711] "Syncing nftables rules"
	I1017 21:14:48.201354       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:14:48.201428       1 main.go:301] handling current node
	I1017 21:14:58.207177       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:14:58.207214       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b53892d991d163dcd2cdb53a2b71a969b1f180ec4e90d8c76aa7c88d90a815b1] <==
	I1017 21:14:06.614607       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:14:06.615251       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:14:06.615676       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:14:06.615922       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 21:14:06.615984       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:14:06.637109       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:14:06.637310       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:14:06.637518       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:14:06.649346       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:14:06.650115       1 aggregator.go:171] initial CRD sync complete...
	I1017 21:14:06.713603       1 autoregister_controller.go:144] Starting autoregister controller
	I1017 21:14:06.713635       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1017 21:14:06.713644       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:14:06.748041       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	E1017 21:14:06.891031       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:14:06.940494       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:14:07.960832       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:14:08.224052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:14:08.548841       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:14:08.644943       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:14:09.198758       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.103.73"}
	I1017 21:14:09.236973       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.144.84"}
	I1017 21:14:11.875594       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:14:12.228184       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:14:12.277689       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [2ff478fda3874de9462917476936e6c82f26f10ca91805040f58d4a1b34a2ead] <==
	I1017 21:14:11.724245       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 21:14:11.724370       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:14:11.725660       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 21:14:11.729303       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:14:11.734836       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:14:11.737726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:14:11.739196       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:14:11.743582       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 21:14:11.745967       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:14:11.749302       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:14:11.753614       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:14:11.764748       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:14:11.767026       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:14:11.767523       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 21:14:11.767892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:14:11.767942       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:14:11.767973       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:14:11.768277       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:14:11.768363       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:14:11.768453       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-820018"
	I1017 21:14:11.768522       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:14:11.769086       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:14:11.791121       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:14:11.798276       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:14:11.801662       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	
	
	==> kube-proxy [726a53759109b479f57b5181c8581a169d71bd0921f3da23034cb833cb6fb231] <==
	I1017 21:14:09.301977       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:14:09.506672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:14:09.607605       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:14:09.607704       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:14:09.607813       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:14:09.629489       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:14:09.629605       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:14:09.638185       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:14:09.638705       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:14:09.638765       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:14:09.645191       1 config.go:200] "Starting service config controller"
	I1017 21:14:09.645271       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:14:09.645314       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:14:09.645342       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:14:09.645380       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:14:09.645408       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:14:09.646091       1 config.go:309] "Starting node config controller"
	I1017 21:14:09.646146       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:14:09.646175       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:14:09.747204       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:14:09.747345       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1017 21:14:09.747614       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ca63f4d68f94148700c6af8d28bb5de973925a05e054d813012caf12d1be18f] <==
	I1017 21:14:03.471221       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:14:09.435855       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:14:09.435889       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:14:09.443350       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:14:09.443456       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:14:09.443532       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:14:09.443566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:14:09.443610       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.443642       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.443789       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:14:09.443857       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:14:09.543804       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:14:09.543953       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:14:09.544092       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.197186     766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/faf9c1d5-5c44-45c5-bc2f-b91224a64db1-kube-api-access-nqnzl podName:faf9c1d5-5c44-45c5-bc2f-b91224a64db1 nodeName:}" failed. No retries permitted until 2025-10-17 21:14:13.69715692 +0000 UTC m=+15.495641361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nqnzl" (UniqueName: "kubernetes.io/projected/faf9c1d5-5c44-45c5-bc2f-b91224a64db1-kube-api-access-nqnzl") pod "kubernetes-dashboard-855c9754f9-zvlnk" (UID: "faf9c1d5-5c44-45c5-bc2f-b91224a64db1") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202648     766 projected.go:291] Couldn't get configMap kubernetes-dashboard/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202703     766 projected.go:196] Error preparing data for projected volume kube-api-access-rp98q for pod kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm: failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: E1017 21:14:13.202777     766 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ba90912c-cf3f-49ef-a121-9c591dbdf3e5-kube-api-access-rp98q podName:ba90912c-cf3f-49ef-a121-9c591dbdf3e5 nodeName:}" failed. No retries permitted until 2025-10-17 21:14:13.702755578 +0000 UTC m=+15.501240019 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-rp98q" (UniqueName: "kubernetes.io/projected/ba90912c-cf3f-49ef-a121-9c591dbdf3e5-kube-api-access-rp98q") pod "dashboard-metrics-scraper-6ffb444bf9-4f7qm" (UID: "ba90912c-cf3f-49ef-a121-9c591dbdf3e5") : failed to sync configmap cache: timed out waiting for the condition
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: W1017 21:14:13.868498     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443 WatchSource:0}: Error finding container 1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443: Status 404 returned error can't find the container with id 1e467b3709ae6b3f2e52823cca3495fbdbfdaa09da74846dc422b059eeb22443
	Oct 17 21:14:13 no-preload-820018 kubelet[766]: W1017 21:14:13.889916     766 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/9842fccb04568a360861ef224479e5ace05e6c588545cae10145e6892796e589/crio-7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411 WatchSource:0}: Error finding container 7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411: Status 404 returned error can't find the container with id 7341d9bedab86f217cbafcd520bba1817763d9e7348f588f8298b43443854411
	Oct 17 21:14:20 no-preload-820018 kubelet[766]: I1017 21:14:20.627407     766 scope.go:117] "RemoveContainer" containerID="19185ea103412fe6cd53a14519d1f099338035f506e131f823b5cb06eeb519df"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: I1017 21:14:21.630947     766 scope.go:117] "RemoveContainer" containerID="19185ea103412fe6cd53a14519d1f099338035f506e131f823b5cb06eeb519df"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: I1017 21:14:21.631688     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:21 no-preload-820018 kubelet[766]: E1017 21:14:21.631836     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:22 no-preload-820018 kubelet[766]: I1017 21:14:22.634395     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:22 no-preload-820018 kubelet[766]: E1017 21:14:22.634553     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:23 no-preload-820018 kubelet[766]: I1017 21:14:23.836735     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:23 no-preload-820018 kubelet[766]: E1017 21:14:23.836901     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:34 no-preload-820018 kubelet[766]: I1017 21:14:34.396414     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:34 no-preload-820018 kubelet[766]: I1017 21:14:34.677697     766 scope.go:117] "RemoveContainer" containerID="3bc16229708b8c2c82c7ccb1c1ba0a4026eeccef3214fd4cf42a723ec435e4e1"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: I1017 21:14:35.681887     766 scope.go:117] "RemoveContainer" containerID="e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: E1017 21:14:35.682119     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:35 no-preload-820018 kubelet[766]: I1017 21:14:35.705420     766 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-zvlnk" podStartSLOduration=12.755548744 podStartE2EDuration="24.705398086s" podCreationTimestamp="2025-10-17 21:14:11 +0000 UTC" firstStartedPulling="2025-10-17 21:14:13.894258499 +0000 UTC m=+15.692742940" lastFinishedPulling="2025-10-17 21:14:25.844107842 +0000 UTC m=+27.642592282" observedRunningTime="2025-10-17 21:14:26.670288871 +0000 UTC m=+28.468773320" watchObservedRunningTime="2025-10-17 21:14:35.705398086 +0000 UTC m=+37.503882527"
	Oct 17 21:14:37 no-preload-820018 kubelet[766]: I1017 21:14:37.689539     766 scope.go:117] "RemoveContainer" containerID="2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e"
	Oct 17 21:14:43 no-preload-820018 kubelet[766]: I1017 21:14:43.837057     766 scope.go:117] "RemoveContainer" containerID="e93b13df9654f73f8bb39d07590233417eeac3aa2913e68a6e9aa94faf1e9581"
	Oct 17 21:14:43 no-preload-820018 kubelet[766]: E1017 21:14:43.837246     766 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4f7qm_kubernetes-dashboard(ba90912c-cf3f-49ef-a121-9c591dbdf3e5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4f7qm" podUID="ba90912c-cf3f-49ef-a121-9c591dbdf3e5"
	Oct 17 21:14:54 no-preload-820018 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:14:54 no-preload-820018 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:14:54 no-preload-820018 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [e558a8931dfd98848c8e8e5a50f3a79349b31be93cb790711e9c8aa95d13f04c] <==
	2025/10/17 21:14:25 Using namespace: kubernetes-dashboard
	2025/10/17 21:14:25 Using in-cluster config to connect to apiserver
	2025/10/17 21:14:25 Using secret token for csrf signing
	2025/10/17 21:14:25 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:14:25 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:14:25 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:14:25 Generating JWE encryption key
	2025/10/17 21:14:25 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:14:25 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:14:26 Initializing JWE encryption key from synchronized object
	2025/10/17 21:14:26 Creating in-cluster Sidecar client
	2025/10/17 21:14:26 Serving insecurely on HTTP port: 9090
	2025/10/17 21:14:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:14:56 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:14:25 Starting overwatch
	
	
	==> storage-provisioner [2d223a0a6e05d617bce6f8cd383a3021c0e8b44df1154451061eeb516c61739e] <==
	I1017 21:14:07.566053       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:14:37.569613       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [488f7a323151a55e17036fa436fb15d1d6bf588716aa453e6e377d21c434237a] <==
	I1017 21:14:37.779275       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:14:37.810708       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:14:37.811118       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:14:37.814059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:41.268721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:45.529028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:49.126865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:52.180305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.203446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.211378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:14:55.211555       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:14:55.215228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c!
	I1017 21:14:55.216906       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a29c5c6d-086f-48e5-9bd2-362e2a3b2aa8", APIVersion:"v1", ResourceVersion:"685", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c became leader
	W1017 21:14:55.225110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:55.234667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:14:55.319216       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-820018_94eaa766-0547-487b-ab8a-bd169f8ec26c!
	W1017 21:14:57.237751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:57.244996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:59.248683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:14:59.262390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
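The first storage-provisioner container above (2d223a0a6e05…) exited after an i/o timeout dialing the in-cluster apiserver VIP 10.96.0.1:443; its replacement (488f7a323151…) acquired the lease normally about thirty seconds later. A minimal sketch of probing that VIP from inside the node, assuming the no-preload-820018 profile were still running and curl is available in the node image (the invocation is illustrative, not taken from this report):

	out/minikube-linux-arm64 ssh -p no-preload-820018 curl -sk https://10.96.0.1:443/version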
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820018 -n no-preload-820018
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-820018 -n no-preload-820018: exit status 2 (403.056204ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-820018 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.18s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (268.313504ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:15:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
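The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's pre-flight check that lists paused containers with runc before enabling an addon; here the listing itself fails because /run/runc does not exist on the node. A minimal sketch of re-running that check by hand, assuming the embed-certs-629583 profile is still up (the ssh form mirrors the audit-log entries elsewhere in this report and is illustrative):

	out/minikube-linux-arm64 ssh -p embed-certs-629583 sudo runc list -f json
	out/minikube-linux-arm64 ssh -p embed-certs-629583 sudo ls -ld /run/runc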
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-629583 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-629583 describe deploy/metrics-server -n kube-system: exit status 1 (93.34789ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-629583 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
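The assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to reference fake.domain/registry.k8s.io/echoserver:1.4, but no deployment exists because the enable step never completed. A hedged sketch of the image check one could run by hand once the addon does deploy (the jsonpath expression is illustrative, not part of the test):

	kubectl --context embed-certs-629583 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'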
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629583
helpers_test.go:243: (dbg) docker inspect embed-certs-629583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	        "Created": "2025-10-17T21:14:19.780499873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 817222,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:14:19.850478496Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hosts",
	        "LogPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa-json.log",
	        "Name": "/embed-certs-629583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	                "LowerDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629583",
	                "Source": "/var/lib/docker/volumes/embed-certs-629583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629583",
	                "name.minikube.sigs.k8s.io": "embed-certs-629583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "09182acf6cd873be0b4f0b337e9893aaf93cbae0a65c8586d288b7ddec5a1d25",
	            "SandboxKey": "/var/run/docker/netns/09182acf6cd8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33844"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33845"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33848"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33846"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33847"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:a5:45:c1:fa:31",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cf73a7bb458977ed299c7ce9cbca11369f8601f7b17d9b0ba6519ff0a5d4f48",
	                    "EndpointID": "eb08e20b2fd24d937358d04daa68d3a2adedda995a6aa1bc79bf7173fd685a4d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629583",
	                        "792e6eed90d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
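The inspect output above reports the kic container as running and not paused ("Paused": false), so the MK_ADDON_ENABLE_PAUSED error earlier in this test stems from the runc listing inside the node rather than from the outer container's state. A quick sketch of pulling just that field, assuming the container still exists (the Go-template format string is illustrative):

	docker inspect -f '{{.State.Paused}}' embed-certs-629583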
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25: (1.260633663s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p bridge-667721 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-667721                │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-667721                │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ ssh     │ -p bridge-667721 sudo crio config                                                                                                                                                                                                             │ bridge-667721                │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ delete  │ -p bridge-667721                                                                                                                                                                                                                              │ bridge-667721                │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:15:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:15:04.828281  820922 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:15:04.828432  820922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:15:04.828443  820922 out.go:374] Setting ErrFile to fd 2...
	I1017 21:15:04.828448  820922 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:15:04.828755  820922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:15:04.829211  820922 out.go:368] Setting JSON to false
	I1017 21:15:04.830383  820922 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14251,"bootTime":1760721454,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:15:04.830462  820922 start.go:141] virtualization:  
	I1017 21:15:04.835058  820922 out.go:179] * [default-k8s-diff-port-332023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:15:04.837791  820922 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:15:04.837839  820922 notify.go:220] Checking for updates...
	I1017 21:15:04.843301  820922 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:15:04.845941  820922 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:15:04.848486  820922 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:15:04.851181  820922 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:15:04.853935  820922 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:15:04.857188  820922 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:15:04.857334  820922 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:15:04.882345  820922 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:15:04.882479  820922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:15:04.942507  820922 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:15:04.932793102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:15:04.942617  820922 docker.go:318] overlay module found
	I1017 21:15:04.945694  820922 out.go:179] * Using the docker driver based on user configuration
	I1017 21:15:04.948516  820922 start.go:305] selected driver: docker
	I1017 21:15:04.948541  820922 start.go:925] validating driver "docker" against <nil>
	I1017 21:15:04.948556  820922 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:15:04.949323  820922 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:15:05.015337  820922 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:15:05.003086864 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:15:05.015516  820922 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 21:15:05.015829  820922 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:15:05.018793  820922 out.go:179] * Using Docker driver with root privileges
	I1017 21:15:05.022998  820922 cni.go:84] Creating CNI manager for ""
	I1017 21:15:05.023218  820922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:15:05.023294  820922 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:15:05.023388  820922 start.go:349] cluster config:
	{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:15:05.026512  820922 out.go:179] * Starting "default-k8s-diff-port-332023" primary control-plane node in "default-k8s-diff-port-332023" cluster
	I1017 21:15:05.029471  820922 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:15:05.032444  820922 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:15:05.035231  820922 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:15:05.035300  820922 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:15:05.035314  820922 cache.go:58] Caching tarball of preloaded images
	I1017 21:15:05.035324  820922 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:15:05.035474  820922 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:15:05.035486  820922 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:15:05.035601  820922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:15:05.035634  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json: {Name:mkfd2fda618428fa4bd7cb9f56026e827de56145 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:05.055331  820922 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:15:05.055367  820922 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:15:05.055418  820922 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:15:05.055451  820922 start.go:360] acquireMachinesLock for default-k8s-diff-port-332023: {Name:mkd5f10687dc08061f4c474fbb408a2c8ae57413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:15:05.055579  820922 start.go:364] duration metric: took 105.552µs to acquireMachinesLock for "default-k8s-diff-port-332023"
	I1017 21:15:05.055612  820922 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:15:05.055688  820922 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:15:04.677705  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:06.678089  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:05.059292  820922 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:15:05.059557  820922 start.go:159] libmachine.API.Create for "default-k8s-diff-port-332023" (driver="docker")
	I1017 21:15:05.059624  820922 client.go:168] LocalClient.Create starting
	I1017 21:15:05.059721  820922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:15:05.059761  820922 main.go:141] libmachine: Decoding PEM data...
	I1017 21:15:05.059783  820922 main.go:141] libmachine: Parsing certificate...
	I1017 21:15:05.059845  820922 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:15:05.059870  820922 main.go:141] libmachine: Decoding PEM data...
	I1017 21:15:05.059881  820922 main.go:141] libmachine: Parsing certificate...
	I1017 21:15:05.060298  820922 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-332023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:15:05.076003  820922 cli_runner.go:211] docker network inspect default-k8s-diff-port-332023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:15:05.076093  820922 network_create.go:284] running [docker network inspect default-k8s-diff-port-332023] to gather additional debugging logs...
	I1017 21:15:05.076119  820922 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-332023
	W1017 21:15:05.094690  820922 cli_runner.go:211] docker network inspect default-k8s-diff-port-332023 returned with exit code 1
	I1017 21:15:05.094734  820922 network_create.go:287] error running [docker network inspect default-k8s-diff-port-332023]: docker network inspect default-k8s-diff-port-332023: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-332023 not found
	I1017 21:15:05.094748  820922 network_create.go:289] output of [docker network inspect default-k8s-diff-port-332023]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-332023 not found
	
	** /stderr **
	I1017 21:15:05.094862  820922 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:15:05.113105  820922 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:15:05.113512  820922 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:15:05.113772  820922 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:15:05.114081  820922 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9cf73a7bb458 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:8e:d7:d8:ab:0c:3c} reservation:<nil>}
	I1017 21:15:05.114538  820922 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a36ca0}
	I1017 21:15:05.114562  820922 network_create.go:124] attempt to create docker network default-k8s-diff-port-332023 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1017 21:15:05.114630  820922 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-332023 default-k8s-diff-port-332023
	I1017 21:15:05.180284  820922 network_create.go:108] docker network default-k8s-diff-port-332023 192.168.85.0/24 created
	I1017 21:15:05.180315  820922 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-332023" container
	I1017 21:15:05.180402  820922 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:15:05.196686  820922 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-332023 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-332023 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:15:05.216055  820922 oci.go:103] Successfully created a docker volume default-k8s-diff-port-332023
	I1017 21:15:05.216148  820922 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-332023-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-332023 --entrypoint /usr/bin/test -v default-k8s-diff-port-332023:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:15:05.782305  820922 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-332023
	I1017 21:15:05.782348  820922 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:15:05.782369  820922 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:15:05.782448  820922 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-332023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:15:09.177638  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:11.179652  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:10.244157  820922 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-332023:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.461666856s)
	I1017 21:15:10.244191  820922 kic.go:203] duration metric: took 4.461818062s to extract preloaded images to volume ...
	W1017 21:15:10.244343  820922 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:15:10.244463  820922 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:15:10.300738  820922 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-332023 --name default-k8s-diff-port-332023 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-332023 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-332023 --network default-k8s-diff-port-332023 --ip 192.168.85.2 --volume default-k8s-diff-port-332023:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:15:10.625347  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Running}}
	I1017 21:15:10.645115  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:10.672655  820922 cli_runner.go:164] Run: docker exec default-k8s-diff-port-332023 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:15:10.734205  820922 oci.go:144] the created container "default-k8s-diff-port-332023" has a running status.
	I1017 21:15:10.734249  820922 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa...
	I1017 21:15:11.132307  820922 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:15:11.157849  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:11.182711  820922 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:15:11.182734  820922 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-332023 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:15:11.222903  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:11.242109  820922 machine.go:93] provisionDockerMachine start ...
	I1017 21:15:11.242212  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:11.260651  820922 main.go:141] libmachine: Using SSH client type: native
	I1017 21:15:11.261033  820922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1017 21:15:11.261049  820922 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:15:11.261689  820922 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:15:14.418920  820922 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:15:14.418947  820922 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-332023"
	I1017 21:15:14.419034  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:14.436864  820922 main.go:141] libmachine: Using SSH client type: native
	I1017 21:15:14.437185  820922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1017 21:15:14.437202  820922 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-332023 && echo "default-k8s-diff-port-332023" | sudo tee /etc/hostname
	I1017 21:15:14.598635  820922 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:15:14.598732  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:14.615892  820922 main.go:141] libmachine: Using SSH client type: native
	I1017 21:15:14.616197  820922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1017 21:15:14.616247  820922 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-332023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-332023/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-332023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:15:14.767398  820922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:15:14.767422  820922 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:15:14.767440  820922 ubuntu.go:190] setting up certificates
	I1017 21:15:14.767449  820922 provision.go:84] configureAuth start
	I1017 21:15:14.767515  820922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:15:14.788904  820922 provision.go:143] copyHostCerts
	I1017 21:15:14.788964  820922 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:15:14.788973  820922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:15:14.789052  820922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:15:14.789144  820922 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:15:14.789149  820922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:15:14.789174  820922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:15:14.789222  820922 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:15:14.789229  820922 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:15:14.789253  820922 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:15:14.789301  820922 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-332023 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-332023 localhost minikube]
	I1017 21:15:15.024755  820922 provision.go:177] copyRemoteCerts
	I1017 21:15:15.024836  820922 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:15:15.024884  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.050642  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:15.163610  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 21:15:15.184778  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:15:15.203036  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:15:15.222063  820922 provision.go:87] duration metric: took 454.599208ms to configureAuth
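	(configureAuth above generates the machine server certificate in Go; purely to illustrate the SAN set it uses (127.0.0.1, 192.168.85.2, the node name, localhost, minikube), a roughly equivalent openssl invocation is sketched below. This is not what the test harness runs:)
	  openssl req -new -newkey rsa:2048 -nodes \
	    -keyout server-key.pem -out server.csr \
	    -subj "/O=jenkins.default-k8s-diff-port-332023"
	  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -out server.pem -days 365 \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:default-k8s-diff-port-332023,DNS:localhost,DNS:minikube')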
	I1017 21:15:15.222089  820922 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:15:15.222271  820922 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:15:15.222380  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.239644  820922 main.go:141] libmachine: Using SSH client type: native
	I1017 21:15:15.239958  820922 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33849 <nil> <nil>}
	I1017 21:15:15.239986  820922 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:15:15.522920  820922 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:15:15.522963  820922 machine.go:96] duration metric: took 4.280825758s to provisionDockerMachine
	I1017 21:15:15.522974  820922 client.go:171] duration metric: took 10.463338766s to LocalClient.Create
	I1017 21:15:15.522990  820922 start.go:167] duration metric: took 10.463438944s to libmachine.API.Create "default-k8s-diff-port-332023"
	I1017 21:15:15.523000  820922 start.go:293] postStartSetup for "default-k8s-diff-port-332023" (driver="docker")
	I1017 21:15:15.523012  820922 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:15:15.523147  820922 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:15:15.523209  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.541058  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:15.647120  820922 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:15:15.650496  820922 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:15:15.650529  820922 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:15:15.650541  820922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:15:15.650599  820922 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:15:15.650690  820922 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:15:15.650817  820922 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:15:15.658397  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:15:15.678218  820922 start.go:296] duration metric: took 155.202872ms for postStartSetup
	I1017 21:15:15.678583  820922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:15:15.696247  820922 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:15:15.696559  820922 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:15:15.696612  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.713630  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:15.816364  820922 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:15:15.820852  820922 start.go:128] duration metric: took 10.765149569s to createHost
	I1017 21:15:15.820873  820922 start.go:83] releasing machines lock for "default-k8s-diff-port-332023", held for 10.765282264s
	I1017 21:15:15.820939  820922 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:15:15.841061  820922 ssh_runner.go:195] Run: cat /version.json
	I1017 21:15:15.841077  820922 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:15:15.841115  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.841138  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:15.861984  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:15.876566  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:16.058127  820922 ssh_runner.go:195] Run: systemctl --version
	I1017 21:15:16.064978  820922 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:15:16.104695  820922 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:15:16.109084  820922 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:15:16.109220  820922 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:15:16.143381  820922 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:15:16.143406  820922 start.go:495] detecting cgroup driver to use...
	I1017 21:15:16.143441  820922 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:15:16.143491  820922 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:15:16.162109  820922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:15:16.175569  820922 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:15:16.175653  820922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:15:16.198981  820922 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:15:16.220643  820922 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:15:16.347874  820922 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:15:16.468975  820922 docker.go:234] disabling docker service ...
	I1017 21:15:16.469042  820922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:15:16.490198  820922 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:15:16.503800  820922 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:15:16.636369  820922 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:15:16.772967  820922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:15:16.786551  820922 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:15:16.801805  820922 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:15:16.801920  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.811958  820922 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:15:16.812068  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.821155  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.830209  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.839720  820922 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:15:16.848434  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.857472  820922 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.870809  820922 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:15:16.880074  820922 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:15:16.887504  820922 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:15:16.894657  820922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:15:17.019422  820922 ssh_runner.go:195] Run: sudo systemctl restart crio
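	(The sed edits above set cri-o's pause image, cgroup manager, conmon cgroup and unprivileged-port sysctl in /etc/crio/crio.conf.d/02-crio.conf before restarting the service. One way to spot-check the result on the node, sketched here and not part of the captured run:)
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl is-active crio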
	I1017 21:15:17.147645  820922 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:15:17.147713  820922 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:15:17.151482  820922 start.go:563] Will wait 60s for crictl version
	I1017 21:15:17.151546  820922 ssh_runner.go:195] Run: which crictl
	I1017 21:15:17.154864  820922 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:15:17.180270  820922 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:15:17.180364  820922 ssh_runner.go:195] Run: crio --version
	I1017 21:15:17.209674  820922 ssh_runner.go:195] Run: crio --version
	I1017 21:15:17.243314  820922 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1017 21:15:13.678357  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:16.178954  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:17.246135  820922 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-332023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:15:17.266126  820922 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:15:17.270167  820922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:15:17.280486  820922 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:15:17.280605  820922 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:15:17.280669  820922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:15:17.325809  820922 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:15:17.325836  820922 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:15:17.325892  820922 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:15:17.353235  820922 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:15:17.353261  820922 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:15:17.353270  820922 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 21:15:17.353351  820922 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-332023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:15:17.353437  820922 ssh_runner.go:195] Run: crio config
	I1017 21:15:17.426381  820922 cni.go:84] Creating CNI manager for ""
	I1017 21:15:17.426402  820922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:15:17.426420  820922 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:15:17.426454  820922 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-332023 NodeName:default-k8s-diff-port-332023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:15:17.426590  820922 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-332023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:15:17.426683  820922 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:15:17.434520  820922 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:15:17.434592  820922 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:15:17.442223  820922 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 21:15:17.454829  820922 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:15:17.467536  820922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 21:15:17.480081  820922 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:15:17.483711  820922 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:15:17.493336  820922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:15:17.621136  820922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:15:17.639304  820922 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023 for IP: 192.168.85.2
	I1017 21:15:17.639327  820922 certs.go:195] generating shared ca certs ...
	I1017 21:15:17.639345  820922 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:17.639484  820922 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:15:17.639530  820922 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:15:17.639542  820922 certs.go:257] generating profile certs ...
	I1017 21:15:17.639602  820922 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.key
	I1017 21:15:17.639631  820922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.crt with IP's: []
	I1017 21:15:17.995860  820922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.crt ...
	I1017 21:15:17.995893  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.crt: {Name:mka143947621642f1f80bde340a2751cbc3aace1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:17.996100  820922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.key ...
	I1017 21:15:17.996118  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.key: {Name:mk573f3826245d9c0c72114f0aa6796fc2b13d55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:17.996214  820922 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414
	I1017 21:15:17.996233  820922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt.a4419414 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1017 21:15:18.456545  820922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt.a4419414 ...
	I1017 21:15:18.456584  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt.a4419414: {Name:mk196bcce3a91518c0f62a896c507918a098e4e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:18.456794  820922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414 ...
	I1017 21:15:18.456811  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414: {Name:mk6cbdd715435d66f2b51a9f5ba2af619e15432f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:18.456909  820922 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt.a4419414 -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt
	I1017 21:15:18.456999  820922 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414 -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key
	I1017 21:15:18.457064  820922 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key
	I1017 21:15:18.457087  820922 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt with IP's: []
	I1017 21:15:20.201767  820922 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt ...
	I1017 21:15:20.201808  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt: {Name:mkf8e3fe68807b31b5f8021f8ea30af3adabb3f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:20.202021  820922 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key ...
	I1017 21:15:20.202037  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key: {Name:mk46a952068ced25af8f995653e873754c70eb69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:20.202236  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:15:20.202279  820922 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:15:20.202301  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:15:20.202329  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:15:20.202356  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:15:20.202382  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:15:20.202431  820922 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:15:20.203015  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:15:20.225886  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:15:20.248445  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:15:20.266756  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:15:20.286218  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 21:15:20.306580  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:15:20.328020  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:15:20.348296  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:15:20.366829  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:15:20.386087  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:15:20.405520  820922 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:15:20.423805  820922 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:15:20.436714  820922 ssh_runner.go:195] Run: openssl version
	I1017 21:15:20.442912  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:15:20.451261  820922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:15:20.454863  820922 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:15:20.454965  820922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:15:20.495768  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:15:20.505490  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:15:20.514040  820922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:15:20.518170  820922 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:15:20.518234  820922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:15:20.559084  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:15:20.567491  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:15:20.575626  820922 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:15:20.579515  820922 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:15:20.579613  820922 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:15:20.620729  820922 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
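	(The block above installs each CA under /usr/share/ca-certificates and links it into /etc/ssl/certs under its OpenSSL subject hash, e.g. b5213941.0 for minikubeCA. The same pattern, sketched generically:)
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"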
	I1017 21:15:20.629309  820922 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:15:20.632835  820922 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:15:20.632890  820922 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:15:20.632970  820922 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:15:20.633029  820922 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:15:20.666230  820922 cri.go:89] found id: ""
	I1017 21:15:20.666325  820922 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:15:20.682278  820922 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:15:20.691648  820922 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:15:20.691727  820922 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:15:20.703777  820922 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:15:20.703796  820922 kubeadm.go:157] found existing configuration files:
	
	I1017 21:15:20.703848  820922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1017 21:15:20.713572  820922 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:15:20.713636  820922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:15:20.723789  820922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1017 21:15:20.731778  820922 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:15:20.731855  820922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:15:20.739186  820922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1017 21:15:20.747575  820922 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:15:20.747710  820922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:15:20.755265  820922 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1017 21:15:20.763381  820922 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:15:20.763477  820922 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:15:20.771311  820922 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:15:20.813524  820922 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:15:20.813662  820922 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:15:20.839022  820922 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:15:20.839175  820922 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:15:20.839234  820922 kubeadm.go:318] OS: Linux
	I1017 21:15:20.839315  820922 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:15:20.839390  820922 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:15:20.839464  820922 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:15:20.839536  820922 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:15:20.839623  820922 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:15:20.839696  820922 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:15:20.839769  820922 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:15:20.839840  820922 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:15:20.839933  820922 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:15:20.911646  820922 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:15:20.911845  820922 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:15:20.911983  820922 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:15:20.920756  820922 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1017 21:15:18.678796  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:21.180175  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:20.925000  820922 out.go:252]   - Generating certificates and keys ...
	I1017 21:15:20.925213  820922 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:15:20.925340  820922 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1017 21:15:21.227738  820922 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:15:22.346673  820922 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:15:22.523686  820922 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:15:23.000008  820922 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:15:23.563560  820922 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:15:23.563972  820922 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-332023 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1017 21:15:23.864135  820922 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:15:23.864432  820922 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-332023 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	W1017 21:15:23.683638  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:26.178657  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:25.249424  820922 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:15:25.367994  820922 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:15:25.495728  820922 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:15:25.496002  820922 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1017 21:15:25.901784  820922 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:15:26.833761  820922 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:15:27.197806  820922 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:15:27.748970  820922 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:15:28.387902  820922 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:15:28.388503  820922 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:15:28.391122  820922 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:15:28.394542  820922 out.go:252]   - Booting up control plane ...
	I1017 21:15:28.394654  820922 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:15:28.394745  820922 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:15:28.396724  820922 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:15:28.412777  820922 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:15:28.413135  820922 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:15:28.421292  820922 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:15:28.421832  820922 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:15:28.421906  820922 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:15:28.559696  820922 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:15:28.559845  820922 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1017 21:15:28.180185  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	W1017 21:15:30.677577  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:30.066332  820922 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.507897986s
	I1017 21:15:30.070505  820922 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:15:30.070981  820922 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1017 21:15:30.071525  820922 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:15:30.072561  820922 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:15:33.310844  820922 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.237753613s
	I1017 21:15:35.230895  820922 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 5.157436434s
	I1017 21:15:37.073402  820922 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 7.001415055s
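	(The control-plane checks above poll health endpoints that can also be queried by hand from inside the node; a sketch, using -k because the components serve self-signed certificates:)
	  curl -ks https://192.168.85.2:8444/livez      # kube-apiserver
	  curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
	  curl -ks https://127.0.0.1:10259/livez        # kube-scheduler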
	I1017 21:15:37.100943  820922 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:15:37.115042  820922 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:15:37.134039  820922 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:15:37.134606  820922 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-332023 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:15:37.148182  820922 kubeadm.go:318] [bootstrap-token] Using token: 0padp0.68tb9hgkgv69cwno
	W1017 21:15:32.678104  816637 node_ready.go:57] node "embed-certs-629583" has "Ready":"False" status (will retry)
	I1017 21:15:33.680300  816637 node_ready.go:49] node "embed-certs-629583" is "Ready"
	I1017 21:15:33.680333  816637 node_ready.go:38] duration metric: took 40.505954165s for node "embed-certs-629583" to be "Ready" ...
	I1017 21:15:33.680346  816637 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:15:33.680401  816637 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:15:33.724526  816637 api_server.go:72] duration metric: took 41.404496335s to wait for apiserver process to appear ...
	I1017 21:15:33.724573  816637 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:15:33.724593  816637 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:15:33.744538  816637 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:15:33.746416  816637 api_server.go:141] control plane version: v1.34.1
	I1017 21:15:33.746458  816637 api_server.go:131] duration metric: took 21.87684ms to wait for apiserver health ...
	I1017 21:15:33.746468  816637 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:15:33.757267  816637 system_pods.go:59] 8 kube-system pods found
	I1017 21:15:33.757311  816637 system_pods.go:61] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:33.757344  816637 system_pods.go:61] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:33.757358  816637 system_pods.go:61] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:33.757363  816637 system_pods.go:61] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:33.757369  816637 system_pods.go:61] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:33.757373  816637 system_pods.go:61] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:33.757384  816637 system_pods.go:61] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:33.757399  816637 system_pods.go:61] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:15:33.757421  816637 system_pods.go:74] duration metric: took 10.932344ms to wait for pod list to return data ...
	I1017 21:15:33.757437  816637 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:15:33.762856  816637 default_sa.go:45] found service account: "default"
	I1017 21:15:33.762884  816637 default_sa.go:55] duration metric: took 5.44019ms for default service account to be created ...
	I1017 21:15:33.762904  816637 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:15:33.778802  816637 system_pods.go:86] 8 kube-system pods found
	I1017 21:15:33.778848  816637 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:33.778855  816637 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:33.778881  816637 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:33.778892  816637 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:33.778903  816637 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:33.778928  816637 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:33.778938  816637 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:33.778945  816637 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:15:33.778965  816637 retry.go:31] will retry after 203.810919ms: missing components: kube-dns
	I1017 21:15:33.989757  816637 system_pods.go:86] 8 kube-system pods found
	I1017 21:15:33.989805  816637 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:33.989832  816637 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:33.989840  816637 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:33.989846  816637 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:33.989852  816637 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:33.989891  816637 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:33.989904  816637 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:33.989910  816637 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:15:33.989938  816637 retry.go:31] will retry after 252.885795ms: missing components: kube-dns
	I1017 21:15:34.247587  816637 system_pods.go:86] 8 kube-system pods found
	I1017 21:15:34.247636  816637 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:34.247643  816637 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:34.247649  816637 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:34.247683  816637 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:34.247695  816637 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:34.247704  816637 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:34.247715  816637 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:34.247721  816637 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:15:34.247751  816637 retry.go:31] will retry after 297.746554ms: missing components: kube-dns
	I1017 21:15:34.549508  816637 system_pods.go:86] 8 kube-system pods found
	I1017 21:15:34.549551  816637 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:34.549558  816637 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:34.549564  816637 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:34.549568  816637 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:34.549573  816637 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:34.549578  816637 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:34.549582  816637 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:34.549587  816637 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:15:34.549603  816637 retry.go:31] will retry after 523.492564ms: missing components: kube-dns
	I1017 21:15:35.078752  816637 system_pods.go:86] 8 kube-system pods found
	I1017 21:15:35.078798  816637 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:15:35.078809  816637 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running
	I1017 21:15:35.078815  816637 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:15:35.078820  816637 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running
	I1017 21:15:35.078844  816637 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running
	I1017 21:15:35.078855  816637 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:15:35.078860  816637 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running
	I1017 21:15:35.078864  816637 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Running
	I1017 21:15:35.078871  816637 system_pods.go:126] duration metric: took 1.315962784s to wait for k8s-apps to be running ...
	I1017 21:15:35.078885  816637 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:15:35.078947  816637 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:15:35.099530  816637 system_svc.go:56] duration metric: took 20.636896ms WaitForService to wait for kubelet
	I1017 21:15:35.099565  816637 kubeadm.go:586] duration metric: took 42.779538425s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:15:35.099587  816637 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:15:35.110058  816637 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:15:35.110126  816637 node_conditions.go:123] node cpu capacity is 2
	I1017 21:15:35.110145  816637 node_conditions.go:105] duration metric: took 10.551473ms to run NodePressure ...
	I1017 21:15:35.110159  816637 start.go:241] waiting for startup goroutines ...
	I1017 21:15:35.110169  816637 start.go:246] waiting for cluster config update ...
	I1017 21:15:35.110180  816637 start.go:255] writing updated cluster config ...
	I1017 21:15:35.110516  816637 ssh_runner.go:195] Run: rm -f paused
	I1017 21:15:35.115714  816637 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:15:35.120540  816637 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7c4gn" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 21:15:37.127199  816637 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:15:37.151062  820922 out.go:252]   - Configuring RBAC rules ...
	I1017 21:15:37.151220  820922 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:15:37.156009  820922 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:15:37.168623  820922 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:15:37.175650  820922 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:15:37.181106  820922 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:15:37.185892  820922 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:15:37.480244  820922 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:15:37.948313  820922 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:15:38.480520  820922 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:15:38.481668  820922 kubeadm.go:318] 
	I1017 21:15:38.481749  820922 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:15:38.481760  820922 kubeadm.go:318] 
	I1017 21:15:38.481841  820922 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:15:38.481850  820922 kubeadm.go:318] 
	I1017 21:15:38.481877  820922 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:15:38.481943  820922 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:15:38.482000  820922 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:15:38.482008  820922 kubeadm.go:318] 
	I1017 21:15:38.482064  820922 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:15:38.482072  820922 kubeadm.go:318] 
	I1017 21:15:38.482122  820922 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:15:38.482131  820922 kubeadm.go:318] 
	I1017 21:15:38.482185  820922 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:15:38.482266  820922 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:15:38.482350  820922 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:15:38.482359  820922 kubeadm.go:318] 
	I1017 21:15:38.482447  820922 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:15:38.482542  820922 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:15:38.482556  820922 kubeadm.go:318] 
	I1017 21:15:38.482645  820922 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token 0padp0.68tb9hgkgv69cwno \
	I1017 21:15:38.482763  820922 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:15:38.482791  820922 kubeadm.go:318] 	--control-plane 
	I1017 21:15:38.482795  820922 kubeadm.go:318] 
	I1017 21:15:38.482884  820922 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:15:38.482892  820922 kubeadm.go:318] 
	I1017 21:15:38.482979  820922 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token 0padp0.68tb9hgkgv69cwno \
	I1017 21:15:38.483143  820922 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:15:38.487896  820922 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:15:38.488144  820922 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:15:38.488301  820922 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1017 21:15:38.488337  820922 cni.go:84] Creating CNI manager for ""
	I1017 21:15:38.488348  820922 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:15:38.493255  820922 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:15:38.496156  820922 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:15:38.500439  820922 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:15:38.500516  820922 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:15:38.515290  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:15:38.841131  820922 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:15:38.841275  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:38.841367  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-332023 minikube.k8s.io/updated_at=2025_10_17T21_15_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=default-k8s-diff-port-332023 minikube.k8s.io/primary=true
	I1017 21:15:39.017710  820922 ops.go:34] apiserver oom_adj: -16
	I1017 21:15:39.017833  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:39.518584  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1017 21:15:39.625744  816637 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:15:41.626144  816637 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:15:40.018905  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:40.517972  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:41.017904  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:41.518512  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:42.018511  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:42.517921  820922 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:15:42.611835  820922 kubeadm.go:1113] duration metric: took 3.770601864s to wait for elevateKubeSystemPrivileges
	I1017 21:15:42.611869  820922 kubeadm.go:402] duration metric: took 21.978984063s to StartCluster
	I1017 21:15:42.611888  820922 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:42.611961  820922 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:15:42.613490  820922 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:15:42.613735  820922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:15:42.613748  820922 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:15:42.613985  820922 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:15:42.614019  820922 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:15:42.614082  820922 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-332023"
	I1017 21:15:42.614092  820922 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-332023"
	I1017 21:15:42.614098  820922 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-332023"
	I1017 21:15:42.614110  820922 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-332023"
	I1017 21:15:42.614120  820922 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:15:42.614455  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:42.614602  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:42.621639  820922 out.go:179] * Verifying Kubernetes components...
	I1017 21:15:42.624872  820922 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:15:42.657880  820922 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-332023"
	I1017 21:15:42.657919  820922 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:15:42.658377  820922 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:15:42.666971  820922 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:15:42.669830  820922 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:15:42.669858  820922 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:15:42.669933  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:42.693451  820922 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:15:42.693474  820922 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:15:42.693535  820922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:15:42.708882  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:42.731315  820922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33849 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:15:42.881922  820922 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:15:42.888667  820922 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:15:42.901831  820922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:15:42.928418  820922 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:15:43.572691  820922 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1017 21:15:43.579336  820922 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:15:43.838062  820922 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1017 21:15:43.840914  820922 addons.go:514] duration metric: took 1.226873925s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1017 21:15:44.078279  820922 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-332023" context rescaled to 1 replicas
	W1017 21:15:43.626475  816637 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:15:45.128072  816637 pod_ready.go:94] pod "coredns-66bc5c9577-7c4gn" is "Ready"
	I1017 21:15:45.128108  816637 pod_ready.go:86] duration metric: took 10.007532607s for pod "coredns-66bc5c9577-7c4gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.132362  816637 pod_ready.go:83] waiting for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.139197  816637 pod_ready.go:94] pod "etcd-embed-certs-629583" is "Ready"
	I1017 21:15:45.139232  816637 pod_ready.go:86] duration metric: took 6.82905ms for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.142832  816637 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.149595  816637 pod_ready.go:94] pod "kube-apiserver-embed-certs-629583" is "Ready"
	I1017 21:15:45.149632  816637 pod_ready.go:86] duration metric: took 6.768274ms for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.152941  816637 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.325212  816637 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629583" is "Ready"
	I1017 21:15:45.325246  816637 pod_ready.go:86] duration metric: took 172.273877ms for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.524455  816637 pod_ready.go:83] waiting for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:45.924488  816637 pod_ready.go:94] pod "kube-proxy-p98l2" is "Ready"
	I1017 21:15:45.924519  816637 pod_ready.go:86] duration metric: took 400.036028ms for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:46.123732  816637 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:46.523615  816637 pod_ready.go:94] pod "kube-scheduler-embed-certs-629583" is "Ready"
	I1017 21:15:46.523641  816637 pod_ready.go:86] duration metric: took 399.882828ms for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:15:46.523654  816637 pod_ready.go:40] duration metric: took 11.407899405s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:15:46.574660  816637 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:15:46.577726  816637 out.go:179] * Done! kubectl is now configured to use "embed-certs-629583" cluster and "default" namespace by default
	W1017 21:15:45.582753  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:15:47.582912  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:15:49.583238  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:15:52.082132  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:15:54.082194  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Oct 17 21:15:33 embed-certs-629583 crio[841]: time="2025-10-17T21:15:33.839937985Z" level=info msg="Created container 52d780d3dd5da7e78009162ae4db2466cc4421675a753d3ddb6530617125e24a: kube-system/coredns-66bc5c9577-7c4gn/coredns" id=05250650-14f3-454d-ac9a-8ec456bd849e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:15:33 embed-certs-629583 crio[841]: time="2025-10-17T21:15:33.840882483Z" level=info msg="Starting container: 52d780d3dd5da7e78009162ae4db2466cc4421675a753d3ddb6530617125e24a" id=52109c0c-5202-4515-984a-474d67f2e07c name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:15:33 embed-certs-629583 crio[841]: time="2025-10-17T21:15:33.850493889Z" level=info msg="Started container" PID=1734 containerID=52d780d3dd5da7e78009162ae4db2466cc4421675a753d3ddb6530617125e24a description=kube-system/coredns-66bc5c9577-7c4gn/coredns id=52109c0c-5202-4515-984a-474d67f2e07c name=/runtime.v1.RuntimeService/StartContainer sandboxID=0631f3c1ae7db9ec331cea2c6c9867818e5cc4357ce1a7dfec398b87bdd2f1c0
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.092752189Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ff22058b-a758-4431-846d-e9538173bc99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.092830451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.098149162Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:46dedfc4318b2f67ee86f7969e40aab9bdf4e78eeac3293aa98ce6a84ad28a6e UID:f7049e1c-d0c1-4766-8d4b-56f73f9c82db NetNS:/var/run/netns/387a9a9c-daa6-4bb5-a60a-bfe5808071f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cad0}] Aliases:map[]}"
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.098324451Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.108950353Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:46dedfc4318b2f67ee86f7969e40aab9bdf4e78eeac3293aa98ce6a84ad28a6e UID:f7049e1c-d0c1-4766-8d4b-56f73f9c82db NetNS:/var/run/netns/387a9a9c-daa6-4bb5-a60a-bfe5808071f9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012cad0}] Aliases:map[]}"
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.109103652Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.112008947Z" level=info msg="Ran pod sandbox 46dedfc4318b2f67ee86f7969e40aab9bdf4e78eeac3293aa98ce6a84ad28a6e with infra container: default/busybox/POD" id=ff22058b-a758-4431-846d-e9538173bc99 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.11712188Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=81b57a12-7805-4acd-8f52-db75279d4a9f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.117315893Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=81b57a12-7805-4acd-8f52-db75279d4a9f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.117371212Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=81b57a12-7805-4acd-8f52-db75279d4a9f name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.120872996Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7f683ec4-43cf-4b37-80e2-5abb11eccb29 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:15:47 embed-certs-629583 crio[841]: time="2025-10-17T21:15:47.124407166Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.207133749Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7f683ec4-43cf-4b37-80e2-5abb11eccb29 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.20788914Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=38030657-812a-4664-8047-ef1fc3b9b803 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.209446896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c398f45c-3b78-423c-b5fb-afe367d5c5de name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.216611902Z" level=info msg="Creating container: default/busybox/busybox" id=dc295cc5-4661-43e6-a4c9-1b7bdfbc8136 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.21743465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.222283693Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.222772611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.23780609Z" level=info msg="Created container f26c80491ec1fb9b89d1ef726dd32fec7f291b1cee9f4e67c1f045c3f737c19c: default/busybox/busybox" id=dc295cc5-4661-43e6-a4c9-1b7bdfbc8136 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.239322886Z" level=info msg="Starting container: f26c80491ec1fb9b89d1ef726dd32fec7f291b1cee9f4e67c1f045c3f737c19c" id=30758039-d102-4c42-bf78-60435a931e19 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:15:49 embed-certs-629583 crio[841]: time="2025-10-17T21:15:49.241185082Z" level=info msg="Started container" PID=1791 containerID=f26c80491ec1fb9b89d1ef726dd32fec7f291b1cee9f4e67c1f045c3f737c19c description=default/busybox/busybox id=30758039-d102-4c42-bf78-60435a931e19 name=/runtime.v1.RuntimeService/StartContainer sandboxID=46dedfc4318b2f67ee86f7969e40aab9bdf4e78eeac3293aa98ce6a84ad28a6e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	f26c80491ec1f       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago        Running             busybox                   0                   46dedfc4318b2       busybox                                      default
	52d780d3dd5da       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      23 seconds ago       Running             coredns                   0                   0631f3c1ae7db       coredns-66bc5c9577-7c4gn                     kube-system
	12fea78924112       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      23 seconds ago       Running             storage-provisioner       0                   b204094b88edf       storage-provisioner                          kube-system
	308d14dfc356b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      About a minute ago   Running             kube-proxy                0                   76147fcc4460e       kube-proxy-p98l2                             kube-system
	312f8a7e4941c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      About a minute ago   Running             kindnet-cni               0                   4e6936d86419f       kindnet-tqd9k                                kube-system
	45232b837acd3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   7e1ee49cfe828       kube-scheduler-embed-certs-629583            kube-system
	842fb0c44d25d       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   c404a38b30194       kube-apiserver-embed-certs-629583            kube-system
	d3d784bed927e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   cab32c8599285       kube-controller-manager-embed-certs-629583   kube-system
	0f3b0f78bffa1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   f9effc2d9c5ef       etcd-embed-certs-629583                      kube-system
	
	
	==> coredns [52d780d3dd5da7e78009162ae4db2466cc4421675a753d3ddb6530617125e24a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42978 - 34063 "HINFO IN 6580998683537797042.7608339989878820990. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013466467s
	
	
	==> describe nodes <==
	Name:               embed-certs-629583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-629583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_14_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:14:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629583
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:15:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:15:33 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:15:33 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:15:33 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:15:33 +0000   Fri, 17 Oct 2025 21:15:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c9c2881c-9e92-49bc-ace3-9a4a72830c65
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-7c4gn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     65s
	  kube-system                 etcd-embed-certs-629583                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         70s
	  kube-system                 kindnet-tqd9k                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-embed-certs-629583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-embed-certs-629583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-p98l2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-embed-certs-629583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 64s                kube-proxy       
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s (x8 over 78s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s                kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s                kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s                kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                node-controller  Node embed-certs-629583 event: Registered Node embed-certs-629583 in Controller
	  Normal   NodeReady                24s                kubelet          Node embed-certs-629583 status is now: NodeReady
	
	
	==> dmesg <==
	[ +44.773771] overlayfs: idmapped layers are currently not supported
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0f3b0f78bffa16bda0883d4940e23300b2f6229d9b6aabaedaaaf2e673ebbe9b] <==
	{"level":"warn","ts":"2025-10-17T21:14:42.495621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.549205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.579898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.643071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.697120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.733211Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.750308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.774155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.803887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.874122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.917435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:42.970720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.000446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.025203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.047395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.061125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.081135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.109887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.124058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.178865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.220545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.246513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.267796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.283910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:14:43.335551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42384","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:15:57 up  3:58,  0 user,  load average: 2.26, 3.32, 3.09
	Linux embed-certs-629583 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [312f8a7e4941cfc7f33f5b1469824b941c389f98e59d6a1eb7e7c916500b5c38] <==
	I1017 21:14:52.732697       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:14:52.732915       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:14:52.733035       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:14:52.733045       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:14:52.733057       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:14:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:14:52.934287       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:14:52.934312       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:14:52.934321       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:14:52.934418       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:15:22.934765       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:15:22.934767       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:15:22.934889       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:15:22.938962       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1017 21:15:24.435138       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:15:24.435181       1 metrics.go:72] Registering metrics
	I1017 21:15:24.435255       1 controller.go:711] "Syncing nftables rules"
	I1017 21:15:32.935165       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:15:32.935239       1 main.go:301] handling current node
	I1017 21:15:42.939187       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:15:42.939284       1 main.go:301] handling current node
	I1017 21:15:52.934060       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:15:52.934184       1 main.go:301] handling current node
	
	
	==> kube-apiserver [842fb0c44d25d0d2e4fbe7c1b284a01eed671c8e29786931afca097d82a09d10] <==
	I1017 21:14:44.310102       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:14:44.310166       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:14:44.317743       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	E1017 21:14:44.318728       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1017 21:14:44.335907       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:14:44.336076       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:14:44.521871       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:14:45.045029       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 21:14:45.068817       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 21:14:45.069127       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:14:45.999063       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:14:46.074165       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:14:46.163703       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:14:46.197199       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 21:14:46.205487       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 21:14:46.207170       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:14:46.217753       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:14:47.436800       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:14:47.457179       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 21:14:47.474017       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 21:14:51.668566       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:14:51.676358       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:14:52.065690       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 21:14:52.176858       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1017 21:15:55.924078       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:60212: use of closed network connection
	
	
	==> kube-controller-manager [d3d784bed927e87edbc83d71ce3d43c4eac4cbac8b5e97f8769800474e9766a7] <==
	I1017 21:14:51.195343       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 21:14:51.198273       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:14:51.207266       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:14:51.208481       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:14:51.209946       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 21:14:51.210024       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:14:51.210260       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:14:51.210827       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:14:51.211081       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:14:51.211307       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:14:51.212535       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:14:51.212603       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 21:14:51.212673       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:14:51.212704       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:14:51.212732       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:14:51.218172       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 21:14:51.219433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 21:14:51.219615       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 21:14:51.219685       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 21:14:51.219731       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 21:14:51.219768       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:14:51.219510       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:14:51.229694       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-629583" podCIDRs=["10.244.0.0/24"]
	I1017 21:14:51.241390       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:15:36.184951       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [308d14dfc356bcf166fad05cf29a0a07c5dafc97944d4f749229e509f5446a17] <==
	I1017 21:14:52.841796       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:14:52.940101       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:14:53.040758       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:14:53.040828       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:14:53.040937       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:14:53.220136       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:14:53.220202       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:14:53.230280       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:14:53.230599       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:14:53.230613       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:14:53.232077       1 config.go:200] "Starting service config controller"
	I1017 21:14:53.232088       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:14:53.232103       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:14:53.232107       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:14:53.232120       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:14:53.232124       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:14:53.233993       1 config.go:309] "Starting node config controller"
	I1017 21:14:53.234013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:14:53.234021       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:14:53.333109       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:14:53.333143       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:14:53.333180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [45232b837acd39669c7c308b69170076e02b9bf26f1446adfb8ae1452cee5fa0] <==
	E1017 21:14:44.314682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:14:44.314938       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 21:14:44.315627       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:14:44.315908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:14:44.316027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:14:44.316247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:14:44.316350       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 21:14:45.198994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 21:14:45.200469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 21:14:45.210286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:14:45.308326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:14:45.308949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 21:14:45.344020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 21:14:45.352050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 21:14:45.357924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 21:14:45.387659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1017 21:14:45.410442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:14:45.422295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 21:14:45.435417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:14:45.507322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 21:14:45.566732       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:14:45.604659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:14:45.671419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:14:45.699856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1017 21:14:47.719166       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:14:48 embed-certs-629583 kubelet[1304]: I1017 21:14:48.427063    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-629583" podStartSLOduration=1.426954337 podStartE2EDuration="1.426954337s" podCreationTimestamp="2025-10-17 21:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:14:48.42688072 +0000 UTC m=+1.188714305" watchObservedRunningTime="2025-10-17 21:14:48.426954337 +0000 UTC m=+1.188787906"
	Oct 17 21:14:48 embed-certs-629583 kubelet[1304]: I1017 21:14:48.459042    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-629583" podStartSLOduration=1.459024015 podStartE2EDuration="1.459024015s" podCreationTimestamp="2025-10-17 21:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:14:48.441722262 +0000 UTC m=+1.203555847" watchObservedRunningTime="2025-10-17 21:14:48.459024015 +0000 UTC m=+1.220857583"
	Oct 17 21:14:48 embed-certs-629583 kubelet[1304]: I1017 21:14:48.459199    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-629583" podStartSLOduration=1.459192632 podStartE2EDuration="1.459192632s" podCreationTimestamp="2025-10-17 21:14:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:14:48.458737742 +0000 UTC m=+1.220571327" watchObservedRunningTime="2025-10-17 21:14:48.459192632 +0000 UTC m=+1.221026201"
	Oct 17 21:14:51 embed-certs-629583 kubelet[1304]: I1017 21:14:51.264995    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 21:14:51 embed-certs-629583 kubelet[1304]: I1017 21:14:51.266303    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191190    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c773c4b3-7cce-47a5-b717-8b9e938d2b04-kube-proxy\") pod \"kube-proxy-p98l2\" (UID: \"c773c4b3-7cce-47a5-b717-8b9e938d2b04\") " pod="kube-system/kube-proxy-p98l2"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191256    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7396d9a-856e-48e7-ac8a-cb092406a40f-lib-modules\") pod \"kindnet-tqd9k\" (UID: \"f7396d9a-856e-48e7-ac8a-cb092406a40f\") " pod="kube-system/kindnet-tqd9k"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191280    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7396d9a-856e-48e7-ac8a-cb092406a40f-xtables-lock\") pod \"kindnet-tqd9k\" (UID: \"f7396d9a-856e-48e7-ac8a-cb092406a40f\") " pod="kube-system/kindnet-tqd9k"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191345    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c773c4b3-7cce-47a5-b717-8b9e938d2b04-xtables-lock\") pod \"kube-proxy-p98l2\" (UID: \"c773c4b3-7cce-47a5-b717-8b9e938d2b04\") " pod="kube-system/kube-proxy-p98l2"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191389    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9fbp\" (UniqueName: \"kubernetes.io/projected/f7396d9a-856e-48e7-ac8a-cb092406a40f-kube-api-access-j9fbp\") pod \"kindnet-tqd9k\" (UID: \"f7396d9a-856e-48e7-ac8a-cb092406a40f\") " pod="kube-system/kindnet-tqd9k"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191414    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c773c4b3-7cce-47a5-b717-8b9e938d2b04-lib-modules\") pod \"kube-proxy-p98l2\" (UID: \"c773c4b3-7cce-47a5-b717-8b9e938d2b04\") " pod="kube-system/kube-proxy-p98l2"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191430    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrxjb\" (UniqueName: \"kubernetes.io/projected/c773c4b3-7cce-47a5-b717-8b9e938d2b04-kube-api-access-vrxjb\") pod \"kube-proxy-p98l2\" (UID: \"c773c4b3-7cce-47a5-b717-8b9e938d2b04\") " pod="kube-system/kube-proxy-p98l2"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.191491    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f7396d9a-856e-48e7-ac8a-cb092406a40f-cni-cfg\") pod \"kindnet-tqd9k\" (UID: \"f7396d9a-856e-48e7-ac8a-cb092406a40f\") " pod="kube-system/kindnet-tqd9k"
	Oct 17 21:14:52 embed-certs-629583 kubelet[1304]: I1017 21:14:52.305858    1304 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:14:53 embed-certs-629583 kubelet[1304]: I1017 21:14:53.597800    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tqd9k" podStartSLOduration=1.597782881 podStartE2EDuration="1.597782881s" podCreationTimestamp="2025-10-17 21:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:14:53.593830539 +0000 UTC m=+6.355664124" watchObservedRunningTime="2025-10-17 21:14:53.597782881 +0000 UTC m=+6.359616450"
	Oct 17 21:14:53 embed-certs-629583 kubelet[1304]: I1017 21:14:53.969002    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-p98l2" podStartSLOduration=1.968982636 podStartE2EDuration="1.968982636s" podCreationTimestamp="2025-10-17 21:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:14:53.611916458 +0000 UTC m=+6.373750035" watchObservedRunningTime="2025-10-17 21:14:53.968982636 +0000 UTC m=+6.730816213"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: I1017 21:15:33.314704    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: I1017 21:15:33.413410    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ae808830-0756-42b5-8463-1fb837a1c9b4-tmp\") pod \"storage-provisioner\" (UID: \"ae808830-0756-42b5-8463-1fb837a1c9b4\") " pod="kube-system/storage-provisioner"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: I1017 21:15:33.413478    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpswf\" (UniqueName: \"kubernetes.io/projected/ae808830-0756-42b5-8463-1fb837a1c9b4-kube-api-access-vpswf\") pod \"storage-provisioner\" (UID: \"ae808830-0756-42b5-8463-1fb837a1c9b4\") " pod="kube-system/storage-provisioner"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: I1017 21:15:33.413504    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3291ddc3-9d57-4caf-859f-f2c6d7d0af4b-config-volume\") pod \"coredns-66bc5c9577-7c4gn\" (UID: \"3291ddc3-9d57-4caf-859f-f2c6d7d0af4b\") " pod="kube-system/coredns-66bc5c9577-7c4gn"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: I1017 21:15:33.413530    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxvdr\" (UniqueName: \"kubernetes.io/projected/3291ddc3-9d57-4caf-859f-f2c6d7d0af4b-kube-api-access-rxvdr\") pod \"coredns-66bc5c9577-7c4gn\" (UID: \"3291ddc3-9d57-4caf-859f-f2c6d7d0af4b\") " pod="kube-system/coredns-66bc5c9577-7c4gn"
	Oct 17 21:15:33 embed-certs-629583 kubelet[1304]: W1017 21:15:33.713838    1304 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/crio-b204094b88edf7e41fde581c883b3c071742d204d9d18e5066b69b12e452be2e WatchSource:0}: Error finding container b204094b88edf7e41fde581c883b3c071742d204d9d18e5066b69b12e452be2e: Status 404 returned error can't find the container with id b204094b88edf7e41fde581c883b3c071742d204d9d18e5066b69b12e452be2e
	Oct 17 21:15:34 embed-certs-629583 kubelet[1304]: I1017 21:15:34.710950    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.710930256 podStartE2EDuration="41.710930256s" podCreationTimestamp="2025-10-17 21:14:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:15:34.690419958 +0000 UTC m=+47.452253544" watchObservedRunningTime="2025-10-17 21:15:34.710930256 +0000 UTC m=+47.472763841"
	Oct 17 21:15:44 embed-certs-629583 kubelet[1304]: I1017 21:15:44.688949    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7c4gn" podStartSLOduration=52.688932019 podStartE2EDuration="52.688932019s" podCreationTimestamp="2025-10-17 21:14:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:15:34.711929195 +0000 UTC m=+47.473762788" watchObservedRunningTime="2025-10-17 21:15:44.688932019 +0000 UTC m=+57.450765596"
	Oct 17 21:15:46 embed-certs-629583 kubelet[1304]: I1017 21:15:46.910398    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvjbf\" (UniqueName: \"kubernetes.io/projected/f7049e1c-d0c1-4766-8d4b-56f73f9c82db-kube-api-access-wvjbf\") pod \"busybox\" (UID: \"f7049e1c-d0c1-4766-8d4b-56f73f9c82db\") " pod="default/busybox"
	
	
	==> storage-provisioner [12fea7892411295c0ab0df33f9bdd4629fe4a9af816f457973a5314e80877169] <==
	W1017 21:15:33.938008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:33.944900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:15:34.031817       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629583_09c37796-940c-4ca2-9d15-cbd17d6c9de0!
	W1017 21:15:35.947660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:35.952762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:37.956407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:37.961146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:39.963710       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:39.968186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:41.971741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:41.994828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:44.004734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:44.028126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:46.031304       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:46.038093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:48.044846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:48.050877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:50.055825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:50.064047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:52.066430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:52.071039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:54.074895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:54.078999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:56.083697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:15:56.091506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
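The storage-provisioner log above repeats the API server's deprecation warning for v1 Endpoints on every sync. For reference, a minimal client-go sketch of the replacement resource the warning names (discovery.k8s.io/v1 EndpointSlice) could look like the following; it is illustrative only, uses an assumed kubeconfig location, and is not taken from the provisioner's code.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location; a pod would use
	// rest.InClusterConfig() instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// discovery.k8s.io/v1 EndpointSlice is the resource the deprecation warning
	// points to in place of core/v1 Endpoints.
	slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		fmt.Println(s.Name, len(s.Endpoints))
	}
}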
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (329.971144ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:16:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
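The MK_ADDON_ENABLE_PAUSED failure above reports that the paused check shelled out to `sudo runc list -f json` on the node and that the command exited with status 1 because /run/runc does not exist under this crio runtime. Below is a minimal sketch of that kind of probe, assuming direct shell access to the node; it only mirrors the command quoted in the error and is not minikube's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the fields of interest from `runc list -f json` output.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	// Same command as quoted in the error above; on this node it fails with
	// "open /run/runc: no such file or directory" because the runc state
	// directory is missing.
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		fmt.Println("runc list failed:", err)
		return
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		fmt.Println("could not parse runc output:", err)
		return
	}
	for _, c := range containers {
		if c.Status == "paused" {
			fmt.Println("paused container:", c.ID)
		}
	}
}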
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-332023 describe deploy/metrics-server -n kube-system: exit status 1 (104.814272ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-332023 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
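The expected string in the assertion above is the --registries override joined to the --images override, which is why a deployment that was never created cannot contain it. A rough sketch of that containment check, under the assumption that the join is a plain "registry/image" concatenation:

package main

import (
	"fmt"
	"strings"
)

// expectedImage joins a registry override with an image override the way the
// assertion above expects ("fake.domain" + "/" + "registry.k8s.io/echoserver:1.4").
func expectedImage(registry, image string) string {
	return registry + "/" + image
}

func main() {
	want := expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
	// Empty in this run because the metrics-server deployment was never created,
	// so the containment check fails and the test reports the message above.
	describeOutput := ""
	if !strings.Contains(describeOutput, want) {
		fmt.Printf("addon did not load correct image; expected to contain %q\n", want)
	}
}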
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-332023
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-332023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	        "Created": "2025-10-17T21:15:10.315339717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 821312,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:15:10.376743296Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98-json.log",
	        "Name": "/default-k8s-diff-port-332023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-332023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-332023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	                "LowerDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-332023",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-332023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-332023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd46ec91157c668347bec3458e0a65b70e4f8fe00a2c02c9c5e4d082be821f1e",
	            "SandboxKey": "/var/run/docker/netns/cd46ec91157c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33849"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33850"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33853"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33851"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33852"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-332023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "36:89:7c:11:43:d9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26a553f884b09380ce04b950347080a804cedc891493065a8f217a57e449901d",
	                    "EndpointID": "d828dd06acbf889dfe7f43a43436daf83c1135d426a32903e03a2e37d30ada95",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-332023",
	                        "cbf8d10c5cde"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
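The inspect output above shows the profile's apiserver port (8444/tcp, matching --apiserver-port=8444) published on 127.0.0.1:33852. A small sketch of reading that mapping back out of the same JSON, assuming the docker CLI is on PATH; the struct models only the fields used here.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspect models just the NetworkSettings.Ports portion of `docker inspect` output.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-332023").Output()
	if err != nil {
		panic(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, b := range containers[0].NetworkSettings.Ports["8444/tcp"] {
		// For the inspect output above this prints 127.0.0.1:33852.
		fmt.Printf("%s:%s\n", b.HostIp, b.HostPort)
	}
}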
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25: (1.550691562s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-521710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │                     │
	│ stop    │ -p old-k8s-version-521710 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:12 UTC │
	│ start   │ -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:12 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                                                                                               │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:16:11
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:16:11.056562  824247 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:16:11.056798  824247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:11.056818  824247 out.go:374] Setting ErrFile to fd 2...
	I1017 21:16:11.056836  824247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:11.057163  824247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:16:11.057700  824247 out.go:368] Setting JSON to false
	I1017 21:16:11.058796  824247 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14317,"bootTime":1760721454,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:16:11.058876  824247 start.go:141] virtualization:  
	I1017 21:16:11.062608  824247 out.go:179] * [embed-certs-629583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:16:11.065690  824247 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:16:11.065818  824247 notify.go:220] Checking for updates...
	I1017 21:16:11.071548  824247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:16:11.074633  824247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:11.077588  824247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:16:11.081185  824247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:16:11.084256  824247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:16:11.087671  824247 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:11.088272  824247 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:16:11.118385  824247 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:16:11.118504  824247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:11.188510  824247 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:11.179266811 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:11.188623  824247 docker.go:318] overlay module found
	I1017 21:16:11.191905  824247 out.go:179] * Using the docker driver based on existing profile
	I1017 21:16:11.194840  824247 start.go:305] selected driver: docker
	I1017 21:16:11.194860  824247 start.go:925] validating driver "docker" against &{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:11.194972  824247 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:16:11.195706  824247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:11.257118  824247 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:11.248025733 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:11.257609  824247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:16:11.257645  824247 cni.go:84] Creating CNI manager for ""
	I1017 21:16:11.257705  824247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:11.257740  824247 start.go:349] cluster config:
	{Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:11.260998  824247 out.go:179] * Starting "embed-certs-629583" primary control-plane node in "embed-certs-629583" cluster
	I1017 21:16:11.264639  824247 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:16:11.267497  824247 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:16:11.270517  824247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:11.270578  824247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:16:11.270595  824247 cache.go:58] Caching tarball of preloaded images
	I1017 21:16:11.270638  824247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:16:11.270711  824247 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:16:11.270722  824247 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:16:11.270828  824247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:16:11.291166  824247 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:16:11.291188  824247 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:16:11.291211  824247 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:16:11.291235  824247 start.go:360] acquireMachinesLock for embed-certs-629583: {Name:mk04401a4732e984651d3d859464878000ecb8c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:16:11.291307  824247 start.go:364] duration metric: took 54.508µs to acquireMachinesLock for "embed-certs-629583"
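
The two acquireMachinesLock lines show profile-level serialization: a named lock with a 500ms retry delay and a 10-minute timeout, acquired here in ~55µs because nothing else holds it. Purely as an illustration of that shape (this is not minikube's actual lock code, and the lock path below is invented), a Linux flock(2)-based version could look like:

    package main

    import (
    	"fmt"
    	"os"
    	"syscall"
    	"time"
    )

    // acquireFileLock takes an exclusive flock on path, retrying every delay
    // until timeout elapses. The caller keeps the file open while holding the
    // lock and closes it (or flocks LOCK_UN) to release.
    func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
    	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
    	if err != nil {
    		return nil, err
    	}
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
    			return f, nil
    		}
    		if time.Now().After(deadline) {
    			f.Close()
    			return nil, fmt.Errorf("timed out after %s waiting for lock %s", timeout, path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	// Hypothetical lock path; the log shows a named in-process lock, not necessarily a file.
    	f, err := acquireFileLock("/tmp/embed-certs-629583.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()
    	fmt.Println("lock acquired")
    }
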
	I1017 21:16:11.291328  824247 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:16:11.291333  824247 fix.go:54] fixHost starting: 
	I1017 21:16:11.291589  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:11.308774  824247 fix.go:112] recreateIfNeeded on embed-certs-629583: state=Stopped err=<nil>
	W1017 21:16:11.308804  824247 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 21:16:11.082646  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:16:13.083215  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	I1017 21:16:11.311975  824247 out.go:252] * Restarting existing docker container for "embed-certs-629583" ...
	I1017 21:16:11.312059  824247 cli_runner.go:164] Run: docker start embed-certs-629583
	I1017 21:16:11.564054  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:11.589310  824247 kic.go:430] container "embed-certs-629583" state is running.
	I1017 21:16:11.589694  824247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:16:11.612831  824247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/config.json ...
	I1017 21:16:11.613165  824247 machine.go:93] provisionDockerMachine start ...
	I1017 21:16:11.613308  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:11.638010  824247 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:11.638337  824247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33854 <nil> <nil>}
	I1017 21:16:11.638348  824247 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:16:11.639444  824247 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:16:14.790533  824247 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
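
The "Error dialing TCP: ssh: handshake failed: EOF" line above is expected right after `docker start`: sshd inside the freshly restarted container is not accepting connections yet, and the native SSH client simply retries until it is (about three seconds in this run). A minimal sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh, reusing the port (33854), user (docker) and key path that appear in the sshutil lines of this log purely for illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps attempting an SSH handshake until it succeeds or the
    // deadline passes, since a just-restarted container needs a moment before
    // sshd answers.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("ssh dial %s: %w", addr, err)
    		}
    		time.Sleep(time.Second)
    	}
    }

    func main() {
    	// Key path and port copied from this log run; adjust for other profiles.
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node, not for real hosts
    		Timeout:         10 * time.Second,
    	}
    	client, err := dialWithRetry("127.0.0.1:33854", cfg, time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.CombinedOutput("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("remote hostname: %s", out)
    }
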
	
	I1017 21:16:14.790576  824247 ubuntu.go:182] provisioning hostname "embed-certs-629583"
	I1017 21:16:14.790684  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:14.808834  824247 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:14.809147  824247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33854 <nil> <nil>}
	I1017 21:16:14.809165  824247 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-629583 && echo "embed-certs-629583" | sudo tee /etc/hostname
	I1017 21:16:14.964643  824247 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-629583
	
	I1017 21:16:14.964758  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:14.982419  824247 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:14.982768  824247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33854 <nil> <nil>}
	I1017 21:16:14.982790  824247 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-629583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-629583/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-629583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:16:15.144911  824247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:16:15.144991  824247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:16:15.145064  824247 ubuntu.go:190] setting up certificates
	I1017 21:16:15.145095  824247 provision.go:84] configureAuth start
	I1017 21:16:15.145198  824247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:16:15.166802  824247 provision.go:143] copyHostCerts
	I1017 21:16:15.166871  824247 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:16:15.166890  824247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:16:15.166967  824247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:16:15.167078  824247 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:16:15.167083  824247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:16:15.167148  824247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:16:15.167220  824247 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:16:15.167225  824247 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:16:15.167253  824247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:16:15.167312  824247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.embed-certs-629583 san=[127.0.0.1 192.168.76.2 embed-certs-629583 localhost minikube]
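
provision.go mints a server certificate signed by the minikube CA with SANs covering every address a client might use to reach the node (127.0.0.1, 192.168.76.2, the hostname, localhost, minikube). A condensed sketch of issuing such a SAN-bearing server cert with crypto/x509; the throwaway CA below stands in for ca.pem/ca-key.pem, and none of this is minikube's exact code:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert issues a TLS server certificate signed by the given CA,
    // valid for every name and IP that clients may use to reach the node.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-629583"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the provision.go line above.
    		DNSNames:    []string{"embed-certs-629583", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	return der, key, err
    }

    func main() {
    	// Throwaway self-signed CA standing in for minikubeCA (ca.pem / ca-key.pem).
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now().Add(-time.Hour),
    		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
    		KeyUsage:              x509.KeyUsageCertSign,
    		IsCA:                  true,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, err := x509.ParseCertificate(caDER)
    	if err != nil {
    		panic(err)
    	}
    	der, _, err := newServerCert(caCert, caKey)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("issued server cert: %d DER bytes (PEM-encode before writing server.pem)\n", len(der))
    }
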
	I1017 21:16:16.109276  824247 provision.go:177] copyRemoteCerts
	I1017 21:16:16.109346  824247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:16:16.109402  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.129424  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:16.235186  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:16:16.254076  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1017 21:16:16.274259  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:16:16.292912  824247 provision.go:87] duration metric: took 1.147786592s to configureAuth
	I1017 21:16:16.292977  824247 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:16:16.293196  824247 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:16.293316  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.314579  824247 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:16.314913  824247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33854 <nil> <nil>}
	I1017 21:16:16.314928  824247 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:16:16.657030  824247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:16:16.657054  824247 machine.go:96] duration metric: took 5.043876253s to provisionDockerMachine
	I1017 21:16:16.657066  824247 start.go:293] postStartSetup for "embed-certs-629583" (driver="docker")
	I1017 21:16:16.657077  824247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:16:16.657137  824247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:16:16.657182  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.678798  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:16.795477  824247 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:16:16.798883  824247 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:16:16.798914  824247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:16:16.798925  824247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:16:16.798981  824247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:16:16.799070  824247 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:16:16.799225  824247 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:16:16.806842  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:16.826316  824247 start.go:296] duration metric: took 169.234886ms for postStartSetup
	I1017 21:16:16.826392  824247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:16:16.826430  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.845343  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:16.948220  824247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:16:16.953094  824247 fix.go:56] duration metric: took 5.661753451s for fixHost
	I1017 21:16:16.953116  824247 start.go:83] releasing machines lock for "embed-certs-629583", held for 5.661800909s
	I1017 21:16:16.953197  824247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-629583
	I1017 21:16:16.969679  824247 ssh_runner.go:195] Run: cat /version.json
	I1017 21:16:16.969749  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.970017  824247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:16:16.970093  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:16.988902  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:16.997582  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:17.099001  824247 ssh_runner.go:195] Run: systemctl --version
	I1017 21:16:17.191805  824247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:16:17.227796  824247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:16:17.232635  824247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:16:17.232706  824247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:16:17.240652  824247 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:16:17.240676  824247 start.go:495] detecting cgroup driver to use...
	I1017 21:16:17.240708  824247 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:16:17.240766  824247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:16:17.256668  824247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:16:17.270395  824247 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:16:17.270508  824247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:16:17.288396  824247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:16:17.302505  824247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:16:17.423537  824247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:16:17.542910  824247 docker.go:234] disabling docker service ...
	I1017 21:16:17.542974  824247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:16:17.558427  824247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:16:17.571779  824247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:16:17.689279  824247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:16:17.810331  824247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:16:17.823289  824247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:16:17.847587  824247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:16:17.847673  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.856767  824247 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:16:17.856842  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.865656  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.874898  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.884075  824247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:16:17.892232  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.905565  824247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.915245  824247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:17.924485  824247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:16:17.932299  824247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:16:17.939840  824247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:18.072595  824247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:16:18.232175  824247 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:16:18.232326  824247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
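
After restarting crio, start.go waits up to 60s for the CRI socket to appear before probing crictl. A minimal local sketch of that kind of wait (the helper name and poll interval are assumptions, and minikube performs the stat over SSH inside the node rather than locally):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists as a unix socket or timeout elapses,
    // mirroring the 60s wait for /var/run/crio/crio.sock shown in the log.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    		}
    		time.Sleep(500 * time.Millisecond) // poll interval is a guess, not minikube's value
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("crio socket is up")
    }
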
	I1017 21:16:18.236545  824247 start.go:563] Will wait 60s for crictl version
	I1017 21:16:18.236626  824247 ssh_runner.go:195] Run: which crictl
	I1017 21:16:18.246934  824247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:16:18.271731  824247 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:16:18.271822  824247 ssh_runner.go:195] Run: crio --version
	I1017 21:16:18.305323  824247 ssh_runner.go:195] Run: crio --version
	I1017 21:16:18.340976  824247 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:16:18.343755  824247 cli_runner.go:164] Run: docker network inspect embed-certs-629583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:16:18.360894  824247 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:16:18.364729  824247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:18.374248  824247 kubeadm.go:883] updating cluster {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:16:18.374362  824247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:18.374429  824247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:18.410819  824247 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:18.410841  824247 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:16:18.410897  824247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:18.441588  824247 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:18.441612  824247 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:16:18.441621  824247 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:16:18.441727  824247 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-629583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:16:18.441815  824247 ssh_runner.go:195] Run: crio config
	I1017 21:16:18.516823  824247 cni.go:84] Creating CNI manager for ""
	I1017 21:16:18.516844  824247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:18.516858  824247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:16:18.516902  824247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-629583 NodeName:embed-certs-629583 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:16:18.517071  824247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-629583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
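
The block above is the full multi-document kubeadm config that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down (2215 bytes). As a throwaway sanity check, and only as a sketch assuming gopkg.in/yaml.v3 (minikube itself just writes the file and lets kubeadm validate it), one can confirm the rendered config parses and carries the expected kubernetesVersion:

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"strings"

    	"gopkg.in/yaml.v3"
    )

    // checkKubeadmConfig walks the multi-document kubeadm config and verifies
    // the ClusterConfiguration document declares the expected kubernetesVersion.
    func checkKubeadmConfig(rendered, wantVersion string) error {
    	dec := yaml.NewDecoder(strings.NewReader(rendered))
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err != nil {
    			if err == io.EOF {
    				return fmt.Errorf("no ClusterConfiguration document found")
    			}
    			return err
    		}
    		if doc["kind"] == "ClusterConfiguration" {
    			if got := doc["kubernetesVersion"]; got != wantVersion {
    				return fmt.Errorf("kubernetesVersion = %v, want %s", got, wantVersion)
    			}
    			return nil
    		}
    	}
    }

    func main() {
    	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the scp line below
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := checkKubeadmConfig(string(raw), "v1.34.1"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubeadm config looks sane")
    }
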
	
	I1017 21:16:18.517146  824247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:16:18.524866  824247 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:16:18.524986  824247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:16:18.532206  824247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1017 21:16:18.545127  824247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:16:18.557996  824247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1017 21:16:18.571156  824247 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:16:18.574852  824247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:18.586854  824247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:18.699456  824247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:18.715370  824247 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583 for IP: 192.168.76.2
	I1017 21:16:18.715389  824247 certs.go:195] generating shared ca certs ...
	I1017 21:16:18.715405  824247 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:18.715578  824247 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:16:18.715648  824247 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:16:18.715662  824247 certs.go:257] generating profile certs ...
	I1017 21:16:18.715767  824247 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/client.key
	I1017 21:16:18.715853  824247 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key.d9e5dc6a
	I1017 21:16:18.715918  824247 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key
	I1017 21:16:18.716105  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:16:18.716160  824247 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:16:18.716172  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:16:18.716198  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:16:18.716244  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:16:18.716274  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:16:18.716341  824247 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:18.716966  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:16:18.736269  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:16:18.753391  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:16:18.772097  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:16:18.789572  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1017 21:16:18.806948  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:16:18.825371  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:16:18.852636  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/embed-certs-629583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:16:18.876272  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:16:18.908198  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:16:18.937261  824247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:16:18.955229  824247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:16:18.971714  824247 ssh_runner.go:195] Run: openssl version
	I1017 21:16:18.982402  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:16:18.992259  824247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:16:18.995912  824247 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:16:18.995993  824247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:16:19.047331  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:16:19.055919  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:16:19.065085  824247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:19.069040  824247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:19.069137  824247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:19.111459  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:16:19.119683  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:16:19.128353  824247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:16:19.133631  824247 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:16:19.133699  824247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:16:19.176571  824247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:16:19.184663  824247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:16:19.188477  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:16:19.229426  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:16:19.270805  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:16:19.315253  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:16:19.367910  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:16:19.428283  824247 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
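
The `openssl x509 -noout -checkend 86400` runs above exit non-zero when a certificate becomes invalid within the next 86400 seconds, which is presumably how the start path decides whether the existing control-plane certs can be reused. The equivalent check in Go, as a small self-contained sketch (the path is one of the certs checked above; adjust as needed):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // becomes invalid before now+d -- the moral equivalent of
    // `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil || block.Type != "CERTIFICATE" {
    		return false, fmt.Errorf("%s: no CERTIFICATE block found", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if soon {
    		fmt.Println("certificate expires within 24h; regenerate it")
    		os.Exit(1)
    	}
    	fmt.Println("certificate is valid for at least another 24h")
    }
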
	I1017 21:16:19.502132  824247 kubeadm.go:400] StartCluster: {Name:embed-certs-629583 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-629583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:19.502274  824247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:16:19.502353  824247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:16:19.561087  824247 cri.go:89] found id: "d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3"
	I1017 21:16:19.561154  824247 cri.go:89] found id: "0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0"
	I1017 21:16:19.561173  824247 cri.go:89] found id: "68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004"
	I1017 21:16:19.561204  824247 cri.go:89] found id: "024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b"
	I1017 21:16:19.561225  824247 cri.go:89] found id: ""
	I1017 21:16:19.561307  824247 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:16:19.579752  824247 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:16:19Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:16:19.579878  824247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:16:19.592633  824247 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:16:19.592696  824247 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:16:19.592763  824247 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:16:19.602619  824247 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:16:19.603255  824247 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-629583" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:19.603548  824247 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-629583" cluster setting kubeconfig missing "embed-certs-629583" context setting]
	I1017 21:16:19.604016  824247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:19.605658  824247 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:16:19.619183  824247 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 21:16:19.619257  824247 kubeadm.go:601] duration metric: took 26.54028ms to restartPrimaryControlPlane
	I1017 21:16:19.619281  824247 kubeadm.go:402] duration metric: took 117.157449ms to StartCluster
	I1017 21:16:19.619319  824247 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:19.619394  824247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:19.620654  824247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:19.620916  824247 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:16:19.621406  824247 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:16:19.621515  824247 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-629583"
	I1017 21:16:19.621544  824247 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-629583"
	W1017 21:16:19.621576  824247 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:16:19.621617  824247 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:16:19.622143  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:19.622355  824247 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:19.622436  824247 addons.go:69] Setting dashboard=true in profile "embed-certs-629583"
	I1017 21:16:19.622471  824247 addons.go:238] Setting addon dashboard=true in "embed-certs-629583"
	W1017 21:16:19.622494  824247 addons.go:247] addon dashboard should already be in state true
	I1017 21:16:19.622536  824247 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:16:19.622965  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:19.625300  824247 addons.go:69] Setting default-storageclass=true in profile "embed-certs-629583"
	I1017 21:16:19.625450  824247 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-629583"
	I1017 21:16:19.625809  824247 out.go:179] * Verifying Kubernetes components...
	I1017 21:16:19.626395  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:19.632154  824247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:19.679048  824247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:16:19.679191  824247 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:16:19.682158  824247 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:19.682181  824247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:16:19.682248  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:19.685424  824247 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1017 21:16:15.582667  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:16:17.583295  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	I1017 21:16:19.689639  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:16:19.689665  824247 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:16:19.689732  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:19.689977  824247 addons.go:238] Setting addon default-storageclass=true in "embed-certs-629583"
	W1017 21:16:19.690000  824247 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:16:19.690031  824247 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:16:19.690462  824247 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:16:19.743885  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:19.755333  824247 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:19.755353  824247 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:16:19.755421  824247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:16:19.773215  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:19.788991  824247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:16:19.993883  824247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:20.013554  824247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:20.027447  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:16:20.027526  824247 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:16:20.088278  824247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:20.091332  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:16:20.091407  824247 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:16:20.162036  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:16:20.162108  824247 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:16:20.257406  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:16:20.257429  824247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:16:20.312683  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:16:20.312708  824247 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:16:20.335482  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:16:20.335507  824247 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:16:20.353267  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:16:20.353292  824247 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:16:20.372777  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:16:20.372813  824247 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:16:20.400675  824247 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:16:20.400702  824247 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:16:20.425746  824247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1017 21:16:20.082851  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	W1017 21:16:22.582554  820922 node_ready.go:57] node "default-k8s-diff-port-332023" has "Ready":"False" status (will retry)
	I1017 21:16:24.590056  820922 node_ready.go:49] node "default-k8s-diff-port-332023" is "Ready"
	I1017 21:16:24.590090  820922 node_ready.go:38] duration metric: took 41.010673941s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:16:24.590105  820922 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:16:24.590163  820922 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:16:24.617192  820922 api_server.go:72] duration metric: took 42.003412639s to wait for apiserver process to appear ...
	I1017 21:16:24.617224  820922 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:16:24.617247  820922 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 21:16:24.646125  820922 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 21:16:24.647429  820922 api_server.go:141] control plane version: v1.34.1
	I1017 21:16:24.647460  820922 api_server.go:131] duration metric: took 30.228347ms to wait for apiserver health ...
	I1017 21:16:24.647489  820922 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:16:24.688467  820922 system_pods.go:59] 8 kube-system pods found
	I1017 21:16:24.688503  820922 system_pods.go:61] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:16:24.688511  820922 system_pods.go:61] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running
	I1017 21:16:24.688518  820922 system_pods.go:61] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:16:24.688522  820922 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running
	I1017 21:16:24.688526  820922 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running
	I1017 21:16:24.688531  820922 system_pods.go:61] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:16:24.688536  820922 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running
	I1017 21:16:24.688542  820922 system_pods.go:61] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:16:24.688548  820922 system_pods.go:74] duration metric: took 41.050897ms to wait for pod list to return data ...
	I1017 21:16:24.688556  820922 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:16:24.693114  820922 default_sa.go:45] found service account: "default"
	I1017 21:16:24.693146  820922 default_sa.go:55] duration metric: took 4.584178ms for default service account to be created ...
	I1017 21:16:24.693157  820922 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:16:24.800120  820922 system_pods.go:86] 8 kube-system pods found
	I1017 21:16:24.800198  820922 system_pods.go:89] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:16:24.800225  820922 system_pods.go:89] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running
	I1017 21:16:24.800246  820922 system_pods.go:89] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:16:24.800279  820922 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running
	I1017 21:16:24.800304  820922 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running
	I1017 21:16:24.800324  820922 system_pods.go:89] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:16:24.800344  820922 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running
	I1017 21:16:24.800366  820922 system_pods.go:89] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:16:24.800421  820922 retry.go:31] will retry after 194.157723ms: missing components: kube-dns
	I1017 21:16:26.869470  824247 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.855829342s)
	I1017 21:16:26.869520  824247 node_ready.go:35] waiting up to 6m0s for node "embed-certs-629583" to be "Ready" ...
	I1017 21:16:26.869841  824247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.781490069s)
	I1017 21:16:26.870140  824247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.444362058s)
	I1017 21:16:26.870286  824247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.876330983s)
	I1017 21:16:26.873155  824247 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-629583 addons enable metrics-server
	
	I1017 21:16:26.909323  824247 node_ready.go:49] node "embed-certs-629583" is "Ready"
	I1017 21:16:26.909354  824247 node_ready.go:38] duration metric: took 39.820701ms for node "embed-certs-629583" to be "Ready" ...
	I1017 21:16:26.909368  824247 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:16:26.909448  824247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:16:26.927392  824247 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 21:16:24.999495  820922 system_pods.go:86] 8 kube-system pods found
	I1017 21:16:24.999574  820922 system_pods.go:89] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:16:24.999597  820922 system_pods.go:89] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running
	I1017 21:16:24.999618  820922 system_pods.go:89] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:16:24.999652  820922 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running
	I1017 21:16:24.999680  820922 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running
	I1017 21:16:24.999702  820922 system_pods.go:89] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:16:24.999722  820922 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running
	I1017 21:16:24.999755  820922 system_pods.go:89] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1017 21:16:24.999789  820922 retry.go:31] will retry after 260.645917ms: missing components: kube-dns
	I1017 21:16:25.269068  820922 system_pods.go:86] 8 kube-system pods found
	I1017 21:16:25.269143  820922 system_pods.go:89] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Running
	I1017 21:16:25.269166  820922 system_pods.go:89] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running
	I1017 21:16:25.269191  820922 system_pods.go:89] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:16:25.269231  820922 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running
	I1017 21:16:25.269260  820922 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running
	I1017 21:16:25.269282  820922 system_pods.go:89] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:16:25.269304  820922 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running
	I1017 21:16:25.269335  820922 system_pods.go:89] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Running
	I1017 21:16:25.269362  820922 system_pods.go:126] duration metric: took 576.199076ms to wait for k8s-apps to be running ...
	I1017 21:16:25.269388  820922 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:16:25.269474  820922 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:16:25.292768  820922 system_svc.go:56] duration metric: took 23.371227ms WaitForService to wait for kubelet
	I1017 21:16:25.292844  820922 kubeadm.go:586] duration metric: took 42.679070803s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:16:25.292878  820922 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:16:25.298094  820922 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:16:25.298131  820922 node_conditions.go:123] node cpu capacity is 2
	I1017 21:16:25.298146  820922 node_conditions.go:105] duration metric: took 5.24936ms to run NodePressure ...
	I1017 21:16:25.298159  820922 start.go:241] waiting for startup goroutines ...
	I1017 21:16:25.298167  820922 start.go:246] waiting for cluster config update ...
	I1017 21:16:25.298181  820922 start.go:255] writing updated cluster config ...
	I1017 21:16:25.298493  820922 ssh_runner.go:195] Run: rm -f paused
	I1017 21:16:25.304994  820922 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:16:25.316467  820922 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.324447  820922 pod_ready.go:94] pod "coredns-66bc5c9577-nvmzl" is "Ready"
	I1017 21:16:25.324481  820922 pod_ready.go:86] duration metric: took 7.941ms for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.326837  820922 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.335369  820922 pod_ready.go:94] pod "etcd-default-k8s-diff-port-332023" is "Ready"
	I1017 21:16:25.335398  820922 pod_ready.go:86] duration metric: took 8.530563ms for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.338070  820922 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.347995  820922 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-332023" is "Ready"
	I1017 21:16:25.348030  820922 pod_ready.go:86] duration metric: took 9.931977ms for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.351941  820922 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.712076  820922 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-332023" is "Ready"
	I1017 21:16:25.712172  820922 pod_ready.go:86] duration metric: took 360.156217ms for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:25.909708  820922 pod_ready.go:83] waiting for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:26.309451  820922 pod_ready.go:94] pod "kube-proxy-rh2gh" is "Ready"
	I1017 21:16:26.309481  820922 pod_ready.go:86] duration metric: took 399.743315ms for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:26.510042  820922 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:26.913309  820922 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-332023" is "Ready"
	I1017 21:16:26.913339  820922 pod_ready.go:86] duration metric: took 403.267171ms for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:26.913352  820922 pod_ready.go:40] duration metric: took 1.608324126s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:16:27.014314  820922 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:16:27.017458  820922 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-332023" cluster and "default" namespace by default
	I1017 21:16:26.930285  824247 addons.go:514] duration metric: took 7.308867875s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 21:16:26.939723  824247 api_server.go:72] duration metric: took 7.318745222s to wait for apiserver process to appear ...
	I1017 21:16:26.939752  824247 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:16:26.939772  824247 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:16:26.955840  824247 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:16:26.955867  824247 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:16:27.440212  824247 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:16:27.452676  824247 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:16:27.454468  824247 api_server.go:141] control plane version: v1.34.1
	I1017 21:16:27.454492  824247 api_server.go:131] duration metric: took 514.73289ms to wait for apiserver health ...
	I1017 21:16:27.454502  824247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:16:27.459276  824247 system_pods.go:59] 8 kube-system pods found
	I1017 21:16:27.459312  824247 system_pods.go:61] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:16:27.459321  824247 system_pods.go:61] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:16:27.459327  824247 system_pods.go:61] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:16:27.459335  824247 system_pods.go:61] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:16:27.459341  824247 system_pods.go:61] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:16:27.459346  824247 system_pods.go:61] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:16:27.459353  824247 system_pods.go:61] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:16:27.459357  824247 system_pods.go:61] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Running
	I1017 21:16:27.459364  824247 system_pods.go:74] duration metric: took 4.855944ms to wait for pod list to return data ...
	I1017 21:16:27.459371  824247 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:16:27.463266  824247 default_sa.go:45] found service account: "default"
	I1017 21:16:27.463288  824247 default_sa.go:55] duration metric: took 3.911414ms for default service account to be created ...
	I1017 21:16:27.463297  824247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:16:27.558635  824247 system_pods.go:86] 8 kube-system pods found
	I1017 21:16:27.558744  824247 system_pods.go:89] "coredns-66bc5c9577-7c4gn" [3291ddc3-9d57-4caf-859f-f2c6d7d0af4b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:16:27.558769  824247 system_pods.go:89] "etcd-embed-certs-629583" [9a26ca13-da9f-408d-a74c-8d4cf8a0d425] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:16:27.558820  824247 system_pods.go:89] "kindnet-tqd9k" [f7396d9a-856e-48e7-ac8a-cb092406a40f] Running
	I1017 21:16:27.558845  824247 system_pods.go:89] "kube-apiserver-embed-certs-629583" [ff3f0240-f5f4-48fd-80f9-892194c31dd7] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:16:27.558873  824247 system_pods.go:89] "kube-controller-manager-embed-certs-629583" [4a2921d7-9514-4818-8561-d8b9b24267ae] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:16:27.558904  824247 system_pods.go:89] "kube-proxy-p98l2" [c773c4b3-7cce-47a5-b717-8b9e938d2b04] Running
	I1017 21:16:27.558932  824247 system_pods.go:89] "kube-scheduler-embed-certs-629583" [a1542772-9065-4b9d-9be8-b6afbaa57327] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:16:27.558954  824247 system_pods.go:89] "storage-provisioner" [ae808830-0756-42b5-8463-1fb837a1c9b4] Running
	I1017 21:16:27.558988  824247 system_pods.go:126] duration metric: took 95.68366ms to wait for k8s-apps to be running ...
	I1017 21:16:27.559011  824247 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:16:27.559120  824247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:16:27.577957  824247 system_svc.go:56] duration metric: took 18.936942ms WaitForService to wait for kubelet
	I1017 21:16:27.578036  824247 kubeadm.go:586] duration metric: took 7.957061455s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:16:27.578070  824247 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:16:27.584152  824247 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:16:27.584182  824247 node_conditions.go:123] node cpu capacity is 2
	I1017 21:16:27.584195  824247 node_conditions.go:105] duration metric: took 6.104019ms to run NodePressure ...
	I1017 21:16:27.584208  824247 start.go:241] waiting for startup goroutines ...
	I1017 21:16:27.584216  824247 start.go:246] waiting for cluster config update ...
	I1017 21:16:27.584237  824247 start.go:255] writing updated cluster config ...
	I1017 21:16:27.584508  824247 ssh_runner.go:195] Run: rm -f paused
	I1017 21:16:27.588318  824247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:16:27.595329  824247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7c4gn" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 21:16:29.600972  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:31.601564  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:33.602457  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:35.606097  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 21:16:24 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:24.659390102Z" level=info msg="Created container a11188a01d90ba84404e21790d6a3ec2cd872a53394508e6ccdb56e868c22bc7: kube-system/coredns-66bc5c9577-nvmzl/coredns" id=a0d48f4d-875d-429e-9932-dd392e07e12c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:16:24 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:24.663957254Z" level=info msg="Starting container: a11188a01d90ba84404e21790d6a3ec2cd872a53394508e6ccdb56e868c22bc7" id=81b6a0f1-1fd4-4289-bfa0-04785ba402fe name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:16:24 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:24.667096317Z" level=info msg="Started container" PID=1742 containerID=a11188a01d90ba84404e21790d6a3ec2cd872a53394508e6ccdb56e868c22bc7 description=kube-system/coredns-66bc5c9577-nvmzl/coredns id=81b6a0f1-1fd4-4289-bfa0-04785ba402fe name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6b7bf2c00408bf69fad0061f04a4782a2d4c9b032118e004b69c465673f75f9
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.636609703Z" level=info msg="Running pod sandbox: default/busybox/POD" id=4b60dd5c-be63-45e2-ab89-a464783418ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.636708232Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.651921512Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0 UID:cd3171c1-800a-494a-9758-08a92ae10d3c NetNS:/var/run/netns/2fc60c1b-b39e-4873-8939-3527b3e96353 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791b0}] Aliases:map[]}"
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.651960635Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.666580053Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0 UID:cd3171c1-800a-494a-9758-08a92ae10d3c NetNS:/var/run/netns/2fc60c1b-b39e-4873-8939-3527b3e96353 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40000791b0}] Aliases:map[]}"
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.666924698Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.670808321Z" level=info msg="Ran pod sandbox 16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0 with infra container: default/busybox/POD" id=4b60dd5c-be63-45e2-ab89-a464783418ce name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.672576312Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b89ccf1-a67d-4af7-b5fc-b8dd915f2828 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.672816767Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4b89ccf1-a67d-4af7-b5fc-b8dd915f2828 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.6729278Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4b89ccf1-a67d-4af7-b5fc-b8dd915f2828 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.676981969Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=883fc36f-1814-418f-a783-f4649c9714f1 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:16:27 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:27.680298618Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.694489278Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=883fc36f-1814-418f-a783-f4649c9714f1 name=/runtime.v1.ImageService/PullImage
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.695300604Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbb21007-0cfb-4192-aa61-4c25c65f6bb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.698504078Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2a6e15c-5e5c-4afc-87e3-bf9bcd0b9d90 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.705041161Z" level=info msg="Creating container: default/busybox/busybox" id=0fc41ad0-c53b-4a06-bfe8-eaec7c10664c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.705770952Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.710455062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.71092075Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.739093398Z" level=info msg="Created container 4d26c79125672eca823f6234e9e0d8a12d2f8f6119ddc5a8b3a27d0c2e975312: default/busybox/busybox" id=0fc41ad0-c53b-4a06-bfe8-eaec7c10664c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.742458589Z" level=info msg="Starting container: 4d26c79125672eca823f6234e9e0d8a12d2f8f6119ddc5a8b3a27d0c2e975312" id=d5d2a4d6-ffde-4bca-8596-cb2407cb4337 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:16:29 default-k8s-diff-port-332023 crio[837]: time="2025-10-17T21:16:29.744969311Z" level=info msg="Started container" PID=1793 containerID=4d26c79125672eca823f6234e9e0d8a12d2f8f6119ddc5a8b3a27d0c2e975312 description=default/busybox/busybox id=d5d2a4d6-ffde-4bca-8596-cb2407cb4337 name=/runtime.v1.RuntimeService/StartContainer sandboxID=16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	4d26c79125672       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   16d3513dec043       busybox                                                default
	a11188a01d90b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   d6b7bf2c00408       coredns-66bc5c9577-nvmzl                               kube-system
	879d1fe3a361c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   217cf80a5bc6b       storage-provisioner                                    kube-system
	68f7e1a27bb25       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   2624fa9443638       kindnet-29xbg                                          kube-system
	de3c344ff14dd       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   6c691e5eb9481       kube-proxy-rh2gh                                       kube-system
	7fd1516d65674       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   6627b55a9914e       kube-scheduler-default-k8s-diff-port-332023            kube-system
	15c47a452e154       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   8340ffcde2b4b       kube-controller-manager-default-k8s-diff-port-332023   kube-system
	76019cdc8f23c       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   e61a43fe6d440       kube-apiserver-default-k8s-diff-port-332023            kube-system
	6d1c2eca232db       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   22c3c0b70ce03       etcd-default-k8s-diff-port-332023                      kube-system
	
	
	==> coredns [a11188a01d90ba84404e21790d6a3ec2cd872a53394508e6ccdb56e868c22bc7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41965 - 58538 "HINFO IN 5439123163458431642.7882169547758305916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033544069s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-332023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-332023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-332023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-332023
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:16:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:16:24 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:16:24 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:16:24 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:16:24 +0000   Fri, 17 Oct 2025 21:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-332023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e5730e8f-3fc7-4fd8-9c01-a78f58d462d6
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-nvmzl                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     54s
	  kube-system                 etcd-default-k8s-diff-port-332023                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-29xbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-default-k8s-diff-port-332023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-332023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-rh2gh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-default-k8s-diff-port-332023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           55s                node-controller  Node default-k8s-diff-port-332023 event: Registered Node default-k8s-diff-port-332023 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-332023 status is now: NodeReady
	
	
	==> dmesg <==
	[Oct17 20:51] overlayfs: idmapped layers are currently not supported
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d1c2eca232db45c09abcef71a2ee7f242edef78e25c2a3379f02fa1582a71ab] <==
	{"level":"warn","ts":"2025-10-17T21:15:33.759299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.783684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.812813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.838946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.863311Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.883408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.911158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.934346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.968444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:33.984932Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.007761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.032174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.044268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.060774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.074782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.091564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.106329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.121681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.136724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.153036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.174706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.197886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.215637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.227617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:15:34.298036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58518","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:16:37 up  3:59,  0 user,  load average: 3.84, 3.55, 3.18
	Linux default-k8s-diff-port-332023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [68f7e1a27bb25ad3cce7a1fdfdc4c999612d59c13027461ae7e6a936c4da91fe] <==
	I1017 21:15:43.838812       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:15:43.839382       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:15:43.839573       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:15:43.839628       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:15:43.839682       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:15:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:15:44.028498       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:15:44.028575       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:15:44.028613       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:15:44.029620       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:16:14.029098       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:16:14.029222       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:16:14.030093       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:16:14.030162       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 21:16:15.529034       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:16:15.529084       1 metrics.go:72] Registering metrics
	I1017 21:16:15.529164       1 controller.go:711] "Syncing nftables rules"
	I1017 21:16:24.031209       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:16:24.031267       1 main.go:301] handling current node
	I1017 21:16:34.027787       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:16:34.027990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [76019cdc8f23c943dae85e806bef0edc7345899b4413568da3d96df5aa154ee1] <==
	I1017 21:15:35.273320       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:15:35.279441       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:15:35.331939       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:15:35.332074       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 21:15:35.348542       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:15:35.348891       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:15:35.463447       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:15:35.880817       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 21:15:35.886218       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 21:15:35.886304       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:15:36.678399       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:15:36.732436       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:15:36.884059       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 21:15:36.891158       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1017 21:15:36.892324       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:15:36.897787       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:15:37.178105       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:15:37.929660       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:15:37.946775       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 21:15:37.957841       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 21:15:43.047186       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:15:43.241673       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1017 21:15:43.368782       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:15:43.387233       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 21:16:35.491344       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:45390: use of closed network connection
	
	
	==> kube-controller-manager [15c47a452e154a98ec008666045438be3c1019e9c17cafaba1ba511509139b23] <==
	I1017 21:15:42.243774       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 21:15:42.246721       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:15:42.268507       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:15:42.269931       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:15:42.270031       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:15:42.270069       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:15:42.270101       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:15:42.270123       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:15:42.270346       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 21:15:42.270415       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1017 21:15:42.271516       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 21:15:42.271587       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:15:42.271640       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:15:42.271552       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 21:15:42.272209       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 21:15:42.272280       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:15:42.271570       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:15:42.272621       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:15:42.273736       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 21:15:42.280012       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 21:15:42.287282       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:15:42.292449       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:15:42.300706       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:15:42.322648       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:16:27.225591       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [de3c344ff14ddbc57f2fcc37fbfdd77bdfb19518e37433a54fd9a9050a169867] <==
	I1017 21:15:43.780075       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:15:43.880676       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:15:43.982293       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:15:43.982391       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:15:43.982520       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:15:44.120066       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:15:44.120281       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:15:44.132397       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:15:44.132819       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:15:44.132843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:15:44.134098       1 config.go:200] "Starting service config controller"
	I1017 21:15:44.134163       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:15:44.134208       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:15:44.134235       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:15:44.134275       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:15:44.134302       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:15:44.136702       1 config.go:309] "Starting node config controller"
	I1017 21:15:44.136771       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:15:44.136818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:15:44.235472       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:15:44.235571       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:15:44.235596       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7fd1516d656747b5f82fd97f041a5e1375329155fb91a6a2855dd7fbbf3dd27b] <==
	E1017 21:15:35.229218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:15:35.229361       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 21:15:35.229558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 21:15:35.229645       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 21:15:35.229714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:15:35.229780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:15:35.229856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 21:15:35.229960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 21:15:35.233011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 21:15:35.236496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:15:35.236678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:15:35.236777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:15:35.236955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 21:15:35.237044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:15:35.237116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 21:15:35.237210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 21:15:36.200464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:15:36.201816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:15:36.210085       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:15:36.259320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1017 21:15:36.282123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1017 21:15:36.355208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:15:36.373903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:15:36.447503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 21:15:38.714641       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:15:42 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:42.262237    1315 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 17 21:15:42 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:42.264134    1315 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.350824    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2fd2528-5232-4574-a792-3be8eca99a9d-xtables-lock\") pod \"kindnet-29xbg\" (UID: \"d2fd2528-5232-4574-a792-3be8eca99a9d\") " pod="kube-system/kindnet-29xbg"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.350947    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2fd2528-5232-4574-a792-3be8eca99a9d-lib-modules\") pod \"kindnet-29xbg\" (UID: \"d2fd2528-5232-4574-a792-3be8eca99a9d\") " pod="kube-system/kindnet-29xbg"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.350985    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mhqh\" (UniqueName: \"kubernetes.io/projected/d2fd2528-5232-4574-a792-3be8eca99a9d-kube-api-access-6mhqh\") pod \"kindnet-29xbg\" (UID: \"d2fd2528-5232-4574-a792-3be8eca99a9d\") " pod="kube-system/kindnet-29xbg"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.351035    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2c3d9c06-0fd9-448b-b4e6-872d16233b50-lib-modules\") pod \"kube-proxy-rh2gh\" (UID: \"2c3d9c06-0fd9-448b-b4e6-872d16233b50\") " pod="kube-system/kube-proxy-rh2gh"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.351055    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjpv\" (UniqueName: \"kubernetes.io/projected/2c3d9c06-0fd9-448b-b4e6-872d16233b50-kube-api-access-gkjpv\") pod \"kube-proxy-rh2gh\" (UID: \"2c3d9c06-0fd9-448b-b4e6-872d16233b50\") " pod="kube-system/kube-proxy-rh2gh"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.351081    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d2fd2528-5232-4574-a792-3be8eca99a9d-cni-cfg\") pod \"kindnet-29xbg\" (UID: \"d2fd2528-5232-4574-a792-3be8eca99a9d\") " pod="kube-system/kindnet-29xbg"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.351120    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2c3d9c06-0fd9-448b-b4e6-872d16233b50-xtables-lock\") pod \"kube-proxy-rh2gh\" (UID: \"2c3d9c06-0fd9-448b-b4e6-872d16233b50\") " pod="kube-system/kube-proxy-rh2gh"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.351143    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2c3d9c06-0fd9-448b-b4e6-872d16233b50-kube-proxy\") pod \"kube-proxy-rh2gh\" (UID: \"2c3d9c06-0fd9-448b-b4e6-872d16233b50\") " pod="kube-system/kube-proxy-rh2gh"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:43.471787    1315 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: W1017 21:15:43.627930    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-6c691e5eb9481d5e5aed8a2a443245abefa9221ee5355510dd567ada7dc691b5 WatchSource:0}: Error finding container 6c691e5eb9481d5e5aed8a2a443245abefa9221ee5355510dd567ada7dc691b5: Status 404 returned error can't find the container with id 6c691e5eb9481d5e5aed8a2a443245abefa9221ee5355510dd567ada7dc691b5
	Oct 17 21:15:43 default-k8s-diff-port-332023 kubelet[1315]: W1017 21:15:43.641421    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-2624fa9443638a65bb89cc4e43423bc53390664dd64a5508f7ba7893a265781e WatchSource:0}: Error finding container 2624fa9443638a65bb89cc4e43423bc53390664dd64a5508f7ba7893a265781e: Status 404 returned error can't find the container with id 2624fa9443638a65bb89cc4e43423bc53390664dd64a5508f7ba7893a265781e
	Oct 17 21:15:44 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:44.062229    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-29xbg" podStartSLOduration=1.062210959 podStartE2EDuration="1.062210959s" podCreationTimestamp="2025-10-17 21:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:15:44.038242817 +0000 UTC m=+6.285718348" watchObservedRunningTime="2025-10-17 21:15:44.062210959 +0000 UTC m=+6.309686466"
	Oct 17 21:15:44 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:15:44.334528    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rh2gh" podStartSLOduration=1.334510225 podStartE2EDuration="1.334510225s" podCreationTimestamp="2025-10-17 21:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:15:44.063212162 +0000 UTC m=+6.310687677" watchObservedRunningTime="2025-10-17 21:15:44.334510225 +0000 UTC m=+6.581985731"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:24.143260    1315 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:24.240856    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtwhn\" (UniqueName: \"kubernetes.io/projected/9748deef-241f-4101-a37b-e6aebe976464-kube-api-access-mtwhn\") pod \"coredns-66bc5c9577-nvmzl\" (UID: \"9748deef-241f-4101-a37b-e6aebe976464\") " pod="kube-system/coredns-66bc5c9577-nvmzl"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:24.241099    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/59ed862b-5b8f-42cd-92cd-331c3436056f-tmp\") pod \"storage-provisioner\" (UID: \"59ed862b-5b8f-42cd-92cd-331c3436056f\") " pod="kube-system/storage-provisioner"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:24.241205    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrplr\" (UniqueName: \"kubernetes.io/projected/59ed862b-5b8f-42cd-92cd-331c3436056f-kube-api-access-zrplr\") pod \"storage-provisioner\" (UID: \"59ed862b-5b8f-42cd-92cd-331c3436056f\") " pod="kube-system/storage-provisioner"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:24.241280    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9748deef-241f-4101-a37b-e6aebe976464-config-volume\") pod \"coredns-66bc5c9577-nvmzl\" (UID: \"9748deef-241f-4101-a37b-e6aebe976464\") " pod="kube-system/coredns-66bc5c9577-nvmzl"
	Oct 17 21:16:24 default-k8s-diff-port-332023 kubelet[1315]: W1017 21:16:24.588018    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-d6b7bf2c00408bf69fad0061f04a4782a2d4c9b032118e004b69c465673f75f9 WatchSource:0}: Error finding container d6b7bf2c00408bf69fad0061f04a4782a2d4c9b032118e004b69c465673f75f9: Status 404 returned error can't find the container with id d6b7bf2c00408bf69fad0061f04a4782a2d4c9b032118e004b69c465673f75f9
	Oct 17 21:16:25 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:25.176638    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nvmzl" podStartSLOduration=42.176619593 podStartE2EDuration="42.176619593s" podCreationTimestamp="2025-10-17 21:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:16:25.145093545 +0000 UTC m=+47.392569060" watchObservedRunningTime="2025-10-17 21:16:25.176619593 +0000 UTC m=+47.424095108"
	Oct 17 21:16:25 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:25.199609    1315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.199593481 podStartE2EDuration="42.199593481s" podCreationTimestamp="2025-10-17 21:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:16:25.17893928 +0000 UTC m=+47.426414828" watchObservedRunningTime="2025-10-17 21:16:25.199593481 +0000 UTC m=+47.447068988"
	Oct 17 21:16:27 default-k8s-diff-port-332023 kubelet[1315]: I1017 21:16:27.368117    1315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r88cq\" (UniqueName: \"kubernetes.io/projected/cd3171c1-800a-494a-9758-08a92ae10d3c-kube-api-access-r88cq\") pod \"busybox\" (UID: \"cd3171c1-800a-494a-9758-08a92ae10d3c\") " pod="default/busybox"
	Oct 17 21:16:27 default-k8s-diff-port-332023 kubelet[1315]: W1017 21:16:27.670810    1315 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0 WatchSource:0}: Error finding container 16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0: Status 404 returned error can't find the container with id 16d3513dec0439aeb315654df2c8d47f9201cd6b20cc4c7c590b28fb54b48eb0
	
	
	==> storage-provisioner [879d1fe3a361cde83a6ab613aad8baeeb84235d26d9b46e404e57c105881ee86] <==
	I1017 21:16:24.626565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:16:24.732855       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:16:24.732964       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:16:24.736426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:24.747892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:16:24.748106       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:16:24.748553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_a718130c-1a37-4744-b885-0e46c3a4e89f!
	I1017 21:16:24.791714       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80975992-6b56-4221-9d62-c0a1d9481647", APIVersion:"v1", ResourceVersion:"456", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-332023_a718130c-1a37-4744-b885-0e46c3a4e89f became leader
	W1017 21:16:24.818584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:24.823913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:16:24.849877       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_a718130c-1a37-4744-b885-0e46c3a4e89f!
	W1017 21:16:26.826763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:26.832241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:28.835603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:28.842611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:30.845615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:30.852540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:32.855794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:32.860904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:34.864259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:34.868920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:36.874089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:16:36.894312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.15s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-629583 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-629583 --alsologtostderr -v=1: exit status 80 (2.328878629s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-629583 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 21:17:14.345062  829375 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:17:14.345263  829375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:14.345290  829375 out.go:374] Setting ErrFile to fd 2...
	I1017 21:17:14.345309  829375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:14.345617  829375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:17:14.345916  829375 out.go:368] Setting JSON to false
	I1017 21:17:14.345964  829375 mustload.go:65] Loading cluster: embed-certs-629583
	I1017 21:17:14.346478  829375 config.go:182] Loaded profile config "embed-certs-629583": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:14.347180  829375 cli_runner.go:164] Run: docker container inspect embed-certs-629583 --format={{.State.Status}}
	I1017 21:17:14.365617  829375 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:17:14.365928  829375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:14.430918  829375 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 21:17:14.420781483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:14.431623  829375 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-629583 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 21:17:14.435440  829375 out.go:179] * Pausing node embed-certs-629583 ... 
	I1017 21:17:14.438453  829375 host.go:66] Checking if "embed-certs-629583" exists ...
	I1017 21:17:14.438810  829375 ssh_runner.go:195] Run: systemctl --version
	I1017 21:17:14.438861  829375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-629583
	I1017 21:17:14.460480  829375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33854 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/embed-certs-629583/id_rsa Username:docker}
	I1017 21:17:14.570019  829375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:17:14.589710  829375 pause.go:52] kubelet running: true
	I1017 21:17:14.589785  829375 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:17:14.931708  829375 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:17:14.931800  829375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:17:15.069962  829375 cri.go:89] found id: "80ca067c7617a34941511a2c6d8b81514e673e55ed6ad60b8c7bd37c4783280c"
	I1017 21:17:15.069991  829375 cri.go:89] found id: "2a2555eb03e49c384cb5cdfff4f5e1c95b87ec94104bcbf346d1410e7ef452c0"
	I1017 21:17:15.069997  829375 cri.go:89] found id: "17885c6005baef49209583e7551e55679f0e578cbfde4c129f765f29985927da"
	I1017 21:17:15.070000  829375 cri.go:89] found id: "417b3b3922976d0c223b2a75f1f19f847ad7152221fd7077a44ee3c4c849f25b"
	I1017 21:17:15.070003  829375 cri.go:89] found id: "fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be"
	I1017 21:17:15.070015  829375 cri.go:89] found id: "d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3"
	I1017 21:17:15.070019  829375 cri.go:89] found id: "0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0"
	I1017 21:17:15.070023  829375 cri.go:89] found id: "68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004"
	I1017 21:17:15.070026  829375 cri.go:89] found id: "024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b"
	I1017 21:17:15.070035  829375 cri.go:89] found id: "1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db"
	I1017 21:17:15.070040  829375 cri.go:89] found id: "7fc3b7613495f459fd4fbc98d8a4f6fc15bd14dd358c1b58365ee9a9707f278f"
	I1017 21:17:15.070044  829375 cri.go:89] found id: ""
	I1017 21:17:15.070107  829375 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:17:15.083861  829375 retry.go:31] will retry after 151.233008ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:17:15Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:17:15.236315  829375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:17:15.272943  829375 pause.go:52] kubelet running: false
	I1017 21:17:15.273007  829375 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:17:15.517515  829375 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:17:15.517599  829375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:17:15.612079  829375 cri.go:89] found id: "80ca067c7617a34941511a2c6d8b81514e673e55ed6ad60b8c7bd37c4783280c"
	I1017 21:17:15.612105  829375 cri.go:89] found id: "2a2555eb03e49c384cb5cdfff4f5e1c95b87ec94104bcbf346d1410e7ef452c0"
	I1017 21:17:15.612111  829375 cri.go:89] found id: "17885c6005baef49209583e7551e55679f0e578cbfde4c129f765f29985927da"
	I1017 21:17:15.612115  829375 cri.go:89] found id: "417b3b3922976d0c223b2a75f1f19f847ad7152221fd7077a44ee3c4c849f25b"
	I1017 21:17:15.612118  829375 cri.go:89] found id: "fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be"
	I1017 21:17:15.612125  829375 cri.go:89] found id: "d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3"
	I1017 21:17:15.612129  829375 cri.go:89] found id: "0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0"
	I1017 21:17:15.612132  829375 cri.go:89] found id: "68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004"
	I1017 21:17:15.612135  829375 cri.go:89] found id: "024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b"
	I1017 21:17:15.612141  829375 cri.go:89] found id: "1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db"
	I1017 21:17:15.612144  829375 cri.go:89] found id: "7fc3b7613495f459fd4fbc98d8a4f6fc15bd14dd358c1b58365ee9a9707f278f"
	I1017 21:17:15.612147  829375 cri.go:89] found id: ""
	I1017 21:17:15.612206  829375 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:17:15.630971  829375 retry.go:31] will retry after 534.495889ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:17:15Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:17:16.166281  829375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:17:16.184061  829375 pause.go:52] kubelet running: false
	I1017 21:17:16.184133  829375 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:17:16.425836  829375 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:17:16.425924  829375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:17:16.548415  829375 cri.go:89] found id: "80ca067c7617a34941511a2c6d8b81514e673e55ed6ad60b8c7bd37c4783280c"
	I1017 21:17:16.548438  829375 cri.go:89] found id: "2a2555eb03e49c384cb5cdfff4f5e1c95b87ec94104bcbf346d1410e7ef452c0"
	I1017 21:17:16.548443  829375 cri.go:89] found id: "17885c6005baef49209583e7551e55679f0e578cbfde4c129f765f29985927da"
	I1017 21:17:16.548447  829375 cri.go:89] found id: "417b3b3922976d0c223b2a75f1f19f847ad7152221fd7077a44ee3c4c849f25b"
	I1017 21:17:16.548455  829375 cri.go:89] found id: "fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be"
	I1017 21:17:16.548459  829375 cri.go:89] found id: "d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3"
	I1017 21:17:16.548462  829375 cri.go:89] found id: "0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0"
	I1017 21:17:16.548466  829375 cri.go:89] found id: "68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004"
	I1017 21:17:16.548476  829375 cri.go:89] found id: "024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b"
	I1017 21:17:16.548483  829375 cri.go:89] found id: "1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db"
	I1017 21:17:16.548490  829375 cri.go:89] found id: "7fc3b7613495f459fd4fbc98d8a4f6fc15bd14dd358c1b58365ee9a9707f278f"
	I1017 21:17:16.548493  829375 cri.go:89] found id: ""
	I1017 21:17:16.548541  829375 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:17:16.574491  829375 out.go:203] 
	W1017 21:17:16.578986  829375 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:17:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 21:17:16.579218  829375 out.go:285] * 
	* 
	W1017 21:17:16.587799  829375 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 21:17:16.592310  829375 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-629583 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629583
helpers_test.go:243: (dbg) docker inspect embed-certs-629583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	        "Created": "2025-10-17T21:14:19.780499873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 824372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:16:11.344055227Z",
	            "FinishedAt": "2025-10-17T21:16:10.468061507Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hosts",
	        "LogPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa-json.log",
	        "Name": "/embed-certs-629583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	                "LowerDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629583",
	                "Source": "/var/lib/docker/volumes/embed-certs-629583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629583",
	                "name.minikube.sigs.k8s.io": "embed-certs-629583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b41ca094c135716ccb7b4e9571c1921cb94e29cd721f72d77de041d88b3c1d2",
	            "SandboxKey": "/var/run/docker/netns/6b41ca094c13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:07:14:cd:5d:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cf73a7bb458977ed299c7ce9cbca11369f8601f7b17d9b0ba6519ff0a5d4f48",
	                    "EndpointID": "0e0766a39347cdbdd18c76e7866b61bc295bab29e2114eaabae2e8d6bd3220b6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629583",
	                        "792e6eed90d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
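Most of the docker inspect dump above is boilerplate; for this failure the interesting fields are the container state and the published host ports. A short sketch of pulling just those fields with Go-template --format expressions (the 22/tcp expression is the same one the harness itself uses later in this log); the container name comes from this report, the rest is plain docker CLI:

  # Container state: status, init PID, last start time
  docker inspect -f '{{.State.Status}} pid={{.State.Pid}} started={{.State.StartedAt}}' embed-certs-629583
  # Host port mapped to the node's SSH port 22/tcp (33854 in this run)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-629583
  # All published ports at a glance
  docker inspect -f '{{range $p, $b := .NetworkSettings.Ports}}{{$p}} -> {{(index $b 0).HostPort}}{{"\n"}}{{end}}' embed-certs-629583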
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583: exit status 2 (481.012076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
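The status check above only asked for the Host field; a non-zero exit from minikube status encodes that some component is not in its expected state, which the harness already tolerates ("may be ok"). To see which component is off, one could drop the --format filter; a small sketch assuming the same binary and profile name as above:

  # Human-readable breakdown of host / kubelet / apiserver / kubeconfig
  out/minikube-linux-arm64 status -p embed-certs-629583
  # Same data as JSON, easier to assert on in scripts
  out/minikube-linux-arm64 status -p embed-certs-629583 --output=json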
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25: (1.904566458s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                              │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                          │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                               │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                              │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                     │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                     │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                          │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                             │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                              │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                             │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:16:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:16:51.238015  827198 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:16:51.238135  827198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:51.238147  827198 out.go:374] Setting ErrFile to fd 2...
	I1017 21:16:51.238152  827198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:51.238416  827198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:16:51.238827  827198 out.go:368] Setting JSON to false
	I1017 21:16:51.239852  827198 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14357,"bootTime":1760721454,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:16:51.239921  827198 start.go:141] virtualization:  
	I1017 21:16:51.243439  827198 out.go:179] * [default-k8s-diff-port-332023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:16:51.247430  827198 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:16:51.247540  827198 notify.go:220] Checking for updates...
	I1017 21:16:51.253393  827198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:16:51.256398  827198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:51.259377  827198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:16:51.262229  827198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:16:51.265170  827198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:16:51.268555  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:51.269123  827198 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:16:51.293623  827198 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:16:51.293735  827198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:51.355761  827198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:51.345768005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:51.355886  827198 docker.go:318] overlay module found
	I1017 21:16:51.359188  827198 out.go:179] * Using the docker driver based on existing profile
	I1017 21:16:51.362118  827198 start.go:305] selected driver: docker
	I1017 21:16:51.362135  827198 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:51.362240  827198 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:16:51.362981  827198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:51.414582  827198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:51.404656432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:51.414950  827198 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:16:51.414992  827198 cni.go:84] Creating CNI manager for ""
	I1017 21:16:51.415060  827198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:51.415133  827198 start.go:349] cluster config:
	{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:51.420009  827198 out.go:179] * Starting "default-k8s-diff-port-332023" primary control-plane node in "default-k8s-diff-port-332023" cluster
	I1017 21:16:51.422846  827198 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:16:51.425793  827198 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:16:51.428618  827198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:51.428683  827198 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:16:51.428693  827198 cache.go:58] Caching tarball of preloaded images
	I1017 21:16:51.428779  827198 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:16:51.428789  827198 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:16:51.428900  827198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:16:51.429223  827198 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:16:51.453285  827198 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:16:51.453306  827198 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:16:51.453328  827198 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:16:51.453352  827198 start.go:360] acquireMachinesLock for default-k8s-diff-port-332023: {Name:mkd5f10687dc08061f4c474fbb408a2c8ae57413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:16:51.453416  827198 start.go:364] duration metric: took 46.089µs to acquireMachinesLock for "default-k8s-diff-port-332023"
	I1017 21:16:51.453462  827198 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:16:51.453469  827198 fix.go:54] fixHost starting: 
	I1017 21:16:51.453722  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:51.470517  827198 fix.go:112] recreateIfNeeded on default-k8s-diff-port-332023: state=Stopped err=<nil>
	W1017 21:16:51.470560  827198 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 21:16:51.601876  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:54.101268  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:16:51.473506  827198 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-332023" ...
	I1017 21:16:51.473588  827198 cli_runner.go:164] Run: docker start default-k8s-diff-port-332023
	I1017 21:16:51.748063  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:51.773071  827198 kic.go:430] container "default-k8s-diff-port-332023" state is running.
	I1017 21:16:51.773747  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:51.799093  827198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:16:51.799462  827198 machine.go:93] provisionDockerMachine start ...
	I1017 21:16:51.799527  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:51.817245  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:51.817679  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:51.817721  827198 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:16:51.818489  827198 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:16:54.970818  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:16:54.970853  827198 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-332023"
	I1017 21:16:54.970965  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:54.988483  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:54.988806  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:54.988821  827198 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-332023 && echo "default-k8s-diff-port-332023" | sudo tee /etc/hostname
	I1017 21:16:55.153680  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:16:55.153837  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.171204  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:55.171520  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:55.171545  827198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-332023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-332023/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-332023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:16:55.319597  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:16:55.319681  827198 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:16:55.319710  827198 ubuntu.go:190] setting up certificates
	I1017 21:16:55.319720  827198 provision.go:84] configureAuth start
	I1017 21:16:55.319801  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:55.341686  827198 provision.go:143] copyHostCerts
	I1017 21:16:55.341758  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:16:55.341779  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:16:55.341863  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:16:55.341974  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:16:55.341985  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:16:55.342011  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:16:55.342085  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:16:55.342094  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:16:55.342117  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:16:55.342185  827198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-332023 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-332023 localhost minikube]
	I1017 21:16:55.475885  827198 provision.go:177] copyRemoteCerts
	I1017 21:16:55.475958  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:16:55.476031  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.494966  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:55.600410  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:16:55.621151  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 21:16:55.639005  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:16:55.659428  827198 provision.go:87] duration metric: took 339.684866ms to configureAuth
	I1017 21:16:55.659455  827198 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:16:55.659648  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:55.659757  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.677279  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:55.677589  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:55.677612  827198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:16:56.014121  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:16:56.014148  827198 machine.go:96] duration metric: took 4.214672332s to provisionDockerMachine
	I1017 21:16:56.014176  827198 start.go:293] postStartSetup for "default-k8s-diff-port-332023" (driver="docker")
	I1017 21:16:56.014188  827198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:16:56.014266  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:16:56.014342  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.036923  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.143296  827198 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:16:56.146606  827198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:16:56.146636  827198 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:16:56.146647  827198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:16:56.146759  827198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:16:56.146846  827198 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:16:56.146957  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:16:56.154362  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:56.172055  827198 start.go:296] duration metric: took 157.863247ms for postStartSetup
	I1017 21:16:56.172146  827198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:16:56.172192  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.189860  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.296073  827198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:16:56.300603  827198 fix.go:56] duration metric: took 4.84712682s for fixHost
	I1017 21:16:56.300629  827198 start.go:83] releasing machines lock for "default-k8s-diff-port-332023", held for 4.847202406s
	I1017 21:16:56.300698  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:56.317268  827198 ssh_runner.go:195] Run: cat /version.json
	I1017 21:16:56.317325  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.317386  827198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:16:56.317587  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.334815  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.343278  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.552681  827198 ssh_runner.go:195] Run: systemctl --version
	I1017 21:16:56.559431  827198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:16:56.596376  827198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:16:56.601377  827198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:16:56.601464  827198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:16:56.609398  827198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:16:56.609423  827198 start.go:495] detecting cgroup driver to use...
	I1017 21:16:56.609475  827198 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:16:56.609552  827198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:16:56.625165  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:16:56.637995  827198 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:16:56.638124  827198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:16:56.653681  827198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:16:56.667194  827198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:16:56.792584  827198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:16:56.921981  827198 docker.go:234] disabling docker service ...
	I1017 21:16:56.922057  827198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:16:56.938129  827198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:16:56.952044  827198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:16:57.075449  827198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:16:57.248970  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:16:57.264020  827198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:16:57.278958  827198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:16:57.279113  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.291954  827198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:16:57.292078  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.302646  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.315696  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.325589  827198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:16:57.334015  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.342814  827198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.352511  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.362077  827198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:16:57.371057  827198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:16:57.379433  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:57.509531  827198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:16:57.656532  827198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:16:57.656636  827198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:16:57.660902  827198 start.go:563] Will wait 60s for crictl version
	I1017 21:16:57.660997  827198 ssh_runner.go:195] Run: which crictl
	I1017 21:16:57.664783  827198 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:16:57.690259  827198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:16:57.690401  827198 ssh_runner.go:195] Run: crio --version
	I1017 21:16:57.718318  827198 ssh_runner.go:195] Run: crio --version
	I1017 21:16:57.755649  827198 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:16:57.758586  827198 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-332023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:16:57.774256  827198 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:16:57.777973  827198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:57.787392  827198 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:16:57.787512  827198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:57.787565  827198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:57.818896  827198 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:57.818922  827198 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:16:57.818977  827198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:57.845290  827198 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:57.845311  827198 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:16:57.845319  827198 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 21:16:57.845472  827198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-332023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:16:57.845581  827198 ssh_runner.go:195] Run: crio config
	I1017 21:16:57.916262  827198 cni.go:84] Creating CNI manager for ""
	I1017 21:16:57.916282  827198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:57.916304  827198 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:16:57.916339  827198 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-332023 NodeName:default-k8s-diff-port-332023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:16:57.916463  827198 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-332023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
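(Editor's note) The block above is the multi-document YAML that minikube renders and ships to /var/tmp/minikube/kubeadm.yaml.new (the 2225-byte scp a few lines below): one InitConfiguration, one ClusterConfiguration, one KubeletConfiguration, and one KubeProxyConfiguration separated by ---. A minimal sketch of iterating such a multi-document stream in Go, using gopkg.in/yaml.v3 as an assumed dependency for the sketch (not necessarily what minikube uses internally):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed for this sketch
)

func main() {
	f, err := os.Open("kubeadm.yaml") // hypothetical local copy of the dump above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each document announces itself via kind/apiVersion, e.g. ClusterConfiguration.
		fmt.Printf("%v (%v)\n", doc["kind"], doc["apiVersion"])
	}
}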
	I1017 21:16:57.916551  827198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:16:57.924361  827198 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:16:57.924483  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:16:57.932065  827198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 21:16:57.945693  827198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:16:57.958634  827198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 21:16:57.972380  827198 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:16:57.976491  827198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:57.986130  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:58.103617  827198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:58.126545  827198 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023 for IP: 192.168.85.2
	I1017 21:16:58.126608  827198 certs.go:195] generating shared ca certs ...
	I1017 21:16:58.126638  827198 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:58.126809  827198 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:16:58.126875  827198 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:16:58.126896  827198 certs.go:257] generating profile certs ...
	I1017 21:16:58.127024  827198 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.key
	I1017 21:16:58.127156  827198 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414
	I1017 21:16:58.127235  827198 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key
	I1017 21:16:58.127377  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:16:58.127439  827198 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:16:58.127464  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:16:58.127522  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:16:58.127572  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:16:58.127637  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:16:58.127710  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:58.128370  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:16:58.156088  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:16:58.177435  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:16:58.199855  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:16:58.221849  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 21:16:58.308948  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:16:58.356757  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:16:58.380333  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:16:58.400581  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:16:58.419902  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:16:58.439835  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:16:58.458620  827198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:16:58.481875  827198 ssh_runner.go:195] Run: openssl version
	I1017 21:16:58.488419  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:16:58.496700  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.500691  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.500801  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.542151  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:16:58.551009  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:16:58.559868  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.563395  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.563460  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.605374  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:16:58.613391  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:16:58.621983  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.625919  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.626032  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.668612  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:16:58.676944  827198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:16:58.681376  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:16:58.732477  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:16:58.774893  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:16:58.820418  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:16:58.881595  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:16:58.947467  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
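(Editor's note) Each `openssl x509 -noout -checkend 86400` call above asks whether the given certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would force certificate regeneration before restart. The same check expressed in Go as a sketch, using one of the files tested above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above; any PEM-encoded cert works here.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: still valid 24h from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}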
	I1017 21:16:59.035284  827198 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:59.035375  827198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:16:59.035517  827198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:16:59.088983  827198 cri.go:89] found id: "3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b"
	I1017 21:16:59.089007  827198 cri.go:89] found id: "dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7"
	I1017 21:16:59.089015  827198 cri.go:89] found id: "3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18"
	I1017 21:16:59.089019  827198 cri.go:89] found id: "da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5"
	I1017 21:16:59.089058  827198 cri.go:89] found id: ""
	I1017 21:16:59.089141  827198 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:16:59.105866  827198 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:16:59Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:16:59.105978  827198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:16:59.118692  827198 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:16:59.118713  827198 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:16:59.118795  827198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:16:59.129842  827198 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:16:59.130743  827198 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-332023" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:59.131403  827198 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-332023" cluster setting kubeconfig missing "default-k8s-diff-port-332023" context setting]
	I1017 21:16:59.132345  827198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:59.134305  827198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:16:59.144745  827198 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 21:16:59.144806  827198 kubeadm.go:601] duration metric: took 26.069063ms to restartPrimaryControlPlane
	I1017 21:16:59.144826  827198 kubeadm.go:402] duration metric: took 109.555249ms to StartCluster
	I1017 21:16:59.144844  827198 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:59.144920  827198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:59.146427  827198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:59.146723  827198 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:16:59.147113  827198 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:16:59.147186  827198 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.147200  827198 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.147206  827198 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:16:59.147231  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.147406  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:59.147447  827198 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.147456  827198 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.147462  827198 addons.go:247] addon dashboard should already be in state true
	I1017 21:16:59.147479  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.147690  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.148116  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.148424  827198 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.148451  827198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-332023"
	I1017 21:16:59.148726  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.154165  827198 out.go:179] * Verifying Kubernetes components...
	I1017 21:16:59.157699  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:59.200539  827198 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:16:59.203584  827198 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:59.203608  827198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:16:59.203671  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.212715  827198 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.212801  827198 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:16:59.212842  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.213346  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.223515  827198 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:16:59.230140  827198 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1017 21:16:56.101592  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:58.112215  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:17:00.606512  824247 pod_ready.go:94] pod "coredns-66bc5c9577-7c4gn" is "Ready"
	I1017 21:17:00.606540  824247 pod_ready.go:86] duration metric: took 33.011183013s for pod "coredns-66bc5c9577-7c4gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.613423  824247 pod_ready.go:83] waiting for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.620942  824247 pod_ready.go:94] pod "etcd-embed-certs-629583" is "Ready"
	I1017 21:17:00.621019  824247 pod_ready.go:86] duration metric: took 7.570186ms for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.624552  824247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.632724  824247 pod_ready.go:94] pod "kube-apiserver-embed-certs-629583" is "Ready"
	I1017 21:17:00.632799  824247 pod_ready.go:86] duration metric: took 8.16792ms for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.636053  824247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.800479  824247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629583" is "Ready"
	I1017 21:17:00.800571  824247 pod_ready.go:86] duration metric: took 164.442111ms for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.000101  824247 pod_ready.go:83] waiting for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:59.233125  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:16:59.233151  827198 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:16:59.233227  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.267665  827198 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:59.267686  827198 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:16:59.267746  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.275288  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.307363  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.313831  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.525834  827198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:59.539408  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:59.603053  827198 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:16:59.604710  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:16:59.604735  827198 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:16:59.636555  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:59.691677  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:16:59.691703  827198 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:16:59.751815  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:16:59.751855  827198 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:16:59.825521  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:16:59.825553  827198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:16:59.869098  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:16:59.869129  827198 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:16:59.897734  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:16:59.897760  827198 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:16:59.928295  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:16:59.928336  827198 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:16:59.950861  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:16:59.950887  827198 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:16:59.973826  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:16:59.973858  827198 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:16:59.996443  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:17:01.399986  824247 pod_ready.go:94] pod "kube-proxy-p98l2" is "Ready"
	I1017 21:17:01.400066  824247 pod_ready.go:86] duration metric: took 399.876003ms for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.599622  824247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.999719  824247 pod_ready.go:94] pod "kube-scheduler-embed-certs-629583" is "Ready"
	I1017 21:17:01.999796  824247 pod_ready.go:86] duration metric: took 400.085877ms for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.999823  824247 pod_ready.go:40] duration metric: took 34.411466806s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:02.099638  824247 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:17:02.102860  824247 out.go:179] * Done! kubectl is now configured to use "embed-certs-629583" cluster and "default" namespace by default
	I1017 21:17:04.325105  827198 node_ready.go:49] node "default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:04.325133  827198 node_ready.go:38] duration metric: took 4.722049715s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:17:04.325169  827198 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:17:04.325247  827198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:17:04.506493  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.967033904s)
	I1017 21:17:05.854666  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.218055461s)
	I1017 21:17:05.890212  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.893725094s)
	I1017 21:17:05.890325  827198 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.565066386s)
	I1017 21:17:05.890425  827198 api_server.go:72] duration metric: took 6.743669909s to wait for apiserver process to appear ...
	I1017 21:17:05.890434  827198 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:17:05.890459  827198 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 21:17:05.894752  827198 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-332023 addons enable metrics-server
	
	I1017 21:17:05.897856  827198 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1017 21:17:05.900241  827198 addons.go:514] duration metric: took 6.753124377s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1017 21:17:05.911865  827198 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:17:05.911950  827198 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
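(Editor's note) The 500 above is expected while the rbac/bootstrap-roles post-start hook is still running; minikube keeps re-polling /healthz until the apiserver returns 200, which the retry at 21:17:06.39 below does. A minimal sketch of such a poll loop in Go; the address and port come from the log, and skipping TLS verification is a shortcut for the sketch only (the real client trusts the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // "ok"
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}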
	I1017 21:17:06.390582  827198 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 21:17:06.400257  827198 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 21:17:06.401787  827198 api_server.go:141] control plane version: v1.34.1
	I1017 21:17:06.401819  827198 api_server.go:131] duration metric: took 511.376695ms to wait for apiserver health ...
	I1017 21:17:06.401829  827198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:17:06.405904  827198 system_pods.go:59] 8 kube-system pods found
	I1017 21:17:06.405992  827198 system_pods.go:61] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:17:06.406026  827198 system_pods.go:61] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:17:06.406039  827198 system_pods.go:61] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:17:06.406048  827198 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:17:06.406056  827198 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:17:06.406062  827198 system_pods.go:61] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:17:06.406078  827198 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:17:06.406082  827198 system_pods.go:61] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Running
	I1017 21:17:06.406088  827198 system_pods.go:74] duration metric: took 4.253057ms to wait for pod list to return data ...
	I1017 21:17:06.406096  827198 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:17:06.409455  827198 default_sa.go:45] found service account: "default"
	I1017 21:17:06.409486  827198 default_sa.go:55] duration metric: took 3.378897ms for default service account to be created ...
	I1017 21:17:06.409500  827198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:17:06.413245  827198 system_pods.go:86] 8 kube-system pods found
	I1017 21:17:06.413281  827198 system_pods.go:89] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:17:06.413292  827198 system_pods.go:89] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:17:06.413327  827198 system_pods.go:89] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:17:06.413335  827198 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:17:06.413343  827198 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:17:06.413355  827198 system_pods.go:89] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:17:06.413362  827198 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:17:06.413367  827198 system_pods.go:89] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Running
	I1017 21:17:06.413374  827198 system_pods.go:126] duration metric: took 3.868799ms to wait for k8s-apps to be running ...
	I1017 21:17:06.413391  827198 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:17:06.413454  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:17:06.427621  827198 system_svc.go:56] duration metric: took 14.214772ms WaitForService to wait for kubelet
	I1017 21:17:06.427662  827198 kubeadm.go:586] duration metric: took 7.280907525s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:17:06.427720  827198 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:17:06.431049  827198 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:17:06.431172  827198 node_conditions.go:123] node cpu capacity is 2
	I1017 21:17:06.431194  827198 node_conditions.go:105] duration metric: took 3.461598ms to run NodePressure ...
	I1017 21:17:06.431210  827198 start.go:241] waiting for startup goroutines ...
	I1017 21:17:06.431241  827198 start.go:246] waiting for cluster config update ...
	I1017 21:17:06.431258  827198 start.go:255] writing updated cluster config ...
	I1017 21:17:06.431558  827198 ssh_runner.go:195] Run: rm -f paused
	I1017 21:17:06.435811  827198 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:06.440356  827198 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 21:17:08.447599  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:10.447686  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:12.949910  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:15.447467  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.17441779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178350474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178511477Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178592389Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182435989Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182598256Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182687611Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187658568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187698167Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187724317Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.191791181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.191832979Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.877409143Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ad6bb547-8ccb-48c4-bd84-1169586b0323 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.879794756Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6257001a-6d1b-41bd-8ea7-c0de75fb9787 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.881333523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=bb58099b-d55d-4def-b67f-7951bdbf3b85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.881880549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.914859828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.91559943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.018180944Z" level=info msg="Created container 1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=bb58099b-d55d-4def-b67f-7951bdbf3b85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.023349394Z" level=info msg="Starting container: 1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db" id=4e977bf6-3176-401b-b6e0-749b1682fd08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.034482553Z" level=info msg="Started container" PID=1731 containerID=1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper id=4e977bf6-3176-401b-b6e0-749b1682fd08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b0d63524e6ccd1725064fb811e513176c199c45f78dde2d2e39edfde85719a5
	Oct 17 21:17:14 embed-certs-629583 conmon[1729]: conmon 1961abb57bee650bdee8 <ninfo>: container 1731 exited with status 1
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.162054225Z" level=info msg="Removing container: 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.185149312Z" level=info msg="Error loading conmon cgroup of container 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d: cgroup deleted" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.195599656Z" level=info msg="Removed container 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1961abb57bee6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   3b0d63524e6cc       dashboard-metrics-scraper-6ffb444bf9-4rn7g   kubernetes-dashboard
	80ca067c7617a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           20 seconds ago      Running             storage-provisioner         2                   2dda121d69624       storage-provisioner                          kube-system
	7fc3b7613495f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago      Running             kubernetes-dashboard        0                   2ad7b43df11e5       kubernetes-dashboard-855c9754f9-j59qn        kubernetes-dashboard
	e6bdde4121a84       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           52 seconds ago      Running             busybox                     1                   ac989fc53d4ed       busybox                                      default
	2a2555eb03e49       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           52 seconds ago      Running             kube-proxy                  1                   767d6589b7e06       kube-proxy-p98l2                             kube-system
	17885c6005bae       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           52 seconds ago      Running             coredns                     1                   a972881142a11       coredns-66bc5c9577-7c4gn                     kube-system
	417b3b3922976       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   2d6d2e3052784       kindnet-tqd9k                                kube-system
	fb31580516aa3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           52 seconds ago      Exited              storage-provisioner         1                   2dda121d69624       storage-provisioner                          kube-system
	d0a52582cdef3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           58 seconds ago      Running             kube-apiserver              1                   641db263ccaf0       kube-apiserver-embed-certs-629583            kube-system
	0565e636fbd6f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           58 seconds ago      Running             kube-scheduler              1                   9f56a4c0ef000       kube-scheduler-embed-certs-629583            kube-system
	68303ee075b96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           58 seconds ago      Running             etcd                        1                   609a054cad244       etcd-embed-certs-629583                      kube-system
	024a29d84fb3e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           58 seconds ago      Running             kube-controller-manager     1                   47159af6c028c       kube-controller-manager-embed-certs-629583   kube-system
	
	
	==> coredns [17885c6005baef49209583e7551e55679f0e578cbfde4c129f765f29985927da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41762 - 11137 "HINFO IN 3775923736227160177.3505267854412391022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012305261s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-629583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-629583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_14_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:14:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629583
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:15:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c9c2881c-9e92-49bc-ace3-9a4a72830c65
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-7c4gn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m26s
	  kube-system                 etcd-embed-certs-629583                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m31s
	  kube-system                 kindnet-tqd9k                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m26s
	  kube-system                 kube-apiserver-embed-certs-629583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-controller-manager-embed-certs-629583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 kube-proxy-p98l2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-scheduler-embed-certs-629583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4rn7g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j59qn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m25s                  kube-proxy       
	  Normal   Starting                 50s                    kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m39s (x8 over 2m39s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m31s                  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m31s                  kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s                  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m27s                  node-controller  Node embed-certs-629583 event: Registered Node embed-certs-629583 in Controller
	  Normal   NodeReady                105s                   kubelet          Node embed-certs-629583 status is now: NodeReady
	  Normal   Starting                 60s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 60s)      kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 60s)      kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                    node-controller  Node embed-certs-629583 event: Registered Node embed-certs-629583 in Controller
	
	
	==> dmesg <==
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004] <==
	{"level":"warn","ts":"2025-10-17T21:16:22.875703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.885931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.897083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.944388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.980473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.041050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.048055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.093160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.118450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.141625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.156894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.178709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.193240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.212496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.233296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.247039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.277766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.295358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.312803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.335899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.376839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.438675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.443859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.467868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.570990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:17:18 up  3:59,  0 user,  load average: 4.35, 3.71, 3.25
	Linux embed-certs-629583 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [417b3b3922976d0c223b2a75f1f19f847ad7152221fd7077a44ee3c4c849f25b] <==
	I1017 21:16:25.947925       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:16:25.950807       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:16:25.950927       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:16:25.950939       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:16:25.950952       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:16:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:16:26.165911       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:16:26.165979       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:16:26.166012       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:16:26.166770       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:16:56.166718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:16:56.166931       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:16:56.167066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:16:56.167236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 21:16:57.567089       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:16:57.567227       1 metrics.go:72] Registering metrics
	I1017 21:16:57.567309       1 controller.go:711] "Syncing nftables rules"
	I1017 21:17:06.169631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:17:06.169742       1 main.go:301] handling current node
	I1017 21:17:16.166038       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:17:16.166148       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3] <==
	I1017 21:16:25.051790       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:16:25.059533       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:16:25.092150       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 21:16:25.092269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:16:25.111852       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:16:25.112313       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:16:25.117696       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:16:25.139158       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:16:25.459322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:16:25.142393       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:16:25.142489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:16:25.157357       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:16:25.197006       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:16:25.438567       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 21:16:25.610719       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:16:25.667387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:16:26.163979       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:16:26.380612       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:16:26.458777       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:16:26.496883       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:16:26.688818       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.24.169"}
	I1017 21:16:26.734288       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.90.202"}
	I1017 21:16:29.229895       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:16:29.627571       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:16:29.728668       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b] <==
	I1017 21:16:29.240816       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:16:29.246062       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:16:29.249279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:16:29.260730       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 21:16:29.260826       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 21:16:29.260861       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 21:16:29.260867       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 21:16:29.260873       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:16:29.263886       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:16:29.264004       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:16:29.264088       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-629583"
	I1017 21:16:29.264132       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:16:29.267066       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:16:29.267224       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 21:16:29.271784       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:16:29.271877       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 21:16:29.271914       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 21:16:29.272107       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:16:29.272433       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:16:29.272494       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:16:29.273013       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:16:29.279648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:16:29.279675       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:16:29.279683       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:16:29.285245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2a2555eb03e49c384cb5cdfff4f5e1c95b87ec94104bcbf346d1410e7ef452c0] <==
	I1017 21:16:26.561957       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:16:27.113205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:16:27.239580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:16:27.239732       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:16:27.239849       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:16:27.416806       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:16:27.422019       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:16:27.428623       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:16:27.428984       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:16:27.428999       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:16:27.430079       1 config.go:200] "Starting service config controller"
	I1017 21:16:27.430122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:16:27.433775       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:16:27.433854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:16:27.433897       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:16:27.433925       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:16:27.438843       1 config.go:309] "Starting node config controller"
	I1017 21:16:27.438936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:16:27.438968       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:16:27.530889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:16:27.534227       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:16:27.534260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0] <==
	I1017 21:16:25.873281       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:16:28.331183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:16:28.331212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:16:28.337006       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:16:28.337105       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:16:28.337167       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.337199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.337240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:16:28.337270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:16:28.337537       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:16:28.337620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:16:28.437304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.437326       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:16:28.437357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:16:30 embed-certs-629583 kubelet[776]: W1017 21:16:30.266881     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/crio-2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1 WatchSource:0}: Error finding container 2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1: Status 404 returned error can't find the container with id 2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1
	Oct 17 21:16:30 embed-certs-629583 kubelet[776]: I1017 21:16:30.499864     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 21:16:35 embed-certs-629583 kubelet[776]: I1017 21:16:35.018195     776 scope.go:117] "RemoveContainer" containerID="0169d7167d27a1fc95d8dd44030834db0f5f929505e43474f1e787f476aa48b8"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: I1017 21:16:36.026362     776 scope.go:117] "RemoveContainer" containerID="0169d7167d27a1fc95d8dd44030834db0f5f929505e43474f1e787f476aa48b8"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: I1017 21:16:36.026773     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: E1017 21:16:36.026951     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:37 embed-certs-629583 kubelet[776]: I1017 21:16:37.031669     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:37 embed-certs-629583 kubelet[776]: E1017 21:16:37.031829     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:40 embed-certs-629583 kubelet[776]: I1017 21:16:40.203142     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:40 embed-certs-629583 kubelet[776]: E1017 21:16:40.203881     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:50 embed-certs-629583 kubelet[776]: I1017 21:16:50.876517     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:51 embed-certs-629583 kubelet[776]: I1017 21:16:51.079917     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: I1017 21:16:52.087791     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: E1017 21:16:52.088769     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: I1017 21:16:52.116098     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j59qn" podStartSLOduration=12.459569211 podStartE2EDuration="23.115766806s" podCreationTimestamp="2025-10-17 21:16:29 +0000 UTC" firstStartedPulling="2025-10-17 21:16:30.269774233 +0000 UTC m=+11.539106448" lastFinishedPulling="2025-10-17 21:16:40.92597182 +0000 UTC m=+22.195304043" observedRunningTime="2025-10-17 21:16:41.073876845 +0000 UTC m=+22.343209076" watchObservedRunningTime="2025-10-17 21:16:52.115766806 +0000 UTC m=+33.385099029"
	Oct 17 21:16:57 embed-certs-629583 kubelet[776]: I1017 21:16:57.100723     776 scope.go:117] "RemoveContainer" containerID="fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be"
	Oct 17 21:17:00 embed-certs-629583 kubelet[776]: I1017 21:17:00.202621     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:00 embed-certs-629583 kubelet[776]: E1017 21:17:00.202841     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:17:13 embed-certs-629583 kubelet[776]: I1017 21:17:13.876837     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: I1017 21:17:14.157741     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: I1017 21:17:14.158042     776 scope.go:117] "RemoveContainer" containerID="1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: E1017 21:17:14.158198     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7fc3b7613495f459fd4fbc98d8a4f6fc15bd14dd358c1b58365ee9a9707f278f] <==
	2025/10/17 21:16:40 Using namespace: kubernetes-dashboard
	2025/10/17 21:16:40 Using in-cluster config to connect to apiserver
	2025/10/17 21:16:40 Using secret token for csrf signing
	2025/10/17 21:16:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:16:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:16:40 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:16:40 Generating JWE encryption key
	2025/10/17 21:16:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:16:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:16:41 Initializing JWE encryption key from synchronized object
	2025/10/17 21:16:41 Creating in-cluster Sidecar client
	2025/10/17 21:16:41 Serving insecurely on HTTP port: 9090
	2025/10/17 21:16:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:16:40 Starting overwatch
	
	
	==> storage-provisioner [80ca067c7617a34941511a2c6d8b81514e673e55ed6ad60b8c7bd37c4783280c] <==
	I1017 21:16:57.199843       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:16:57.220230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:16:57.220394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:16:57.231433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:00.688650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:04.949652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:08.549774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:11.603639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.630543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.643060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:14.645979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:17:14.646305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22cc3084-8a79-473b-b8ad-1d1d682dd739", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973 became leader
	I1017 21:17:14.646350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973!
	W1017 21:17:14.650238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.682817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:14.747262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973!
	W1017 21:17:16.689037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:16.696692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:18.700462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:18.714004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be] <==
	I1017 21:16:26.263142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:16:56.265208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629583 -n embed-certs-629583: exit status 2 (537.749082ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1017 21:17:19.723744  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-629583
helpers_test.go:243: (dbg) docker inspect embed-certs-629583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	        "Created": "2025-10-17T21:14:19.780499873Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 824372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:16:11.344055227Z",
	            "FinishedAt": "2025-10-17T21:16:10.468061507Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/hosts",
	        "LogPath": "/var/lib/docker/containers/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa-json.log",
	        "Name": "/embed-certs-629583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-629583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-629583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa",
	                "LowerDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/merged",
	                "UpperDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/diff",
	                "WorkDir": "/var/lib/docker/overlay2/03ab6ac739d2a8bec28669352ea03a27cd9ddd2a37f2409982cfafbcfef7a577/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-629583",
	                "Source": "/var/lib/docker/volumes/embed-certs-629583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-629583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-629583",
	                "name.minikube.sigs.k8s.io": "embed-certs-629583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6b41ca094c135716ccb7b4e9571c1921cb94e29cd721f72d77de041d88b3c1d2",
	            "SandboxKey": "/var/run/docker/netns/6b41ca094c13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33854"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33855"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33858"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33856"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33857"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-629583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:07:14:cd:5d:3c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9cf73a7bb458977ed299c7ce9cbca11369f8601f7b17d9b0ba6519ff0a5d4f48",
	                    "EndpointID": "0e0766a39347cdbdd18c76e7866b61bc295bab29e2114eaabae2e8d6bd3220b6",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-629583",
	                        "792e6eed90d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583: exit status 2 (524.985389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25
E1017 21:17:21.249885  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-629583 logs -n 25: (1.941426465s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-820018 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ stop    │ -p no-preload-820018 --alsologtostderr -v=3                                                                                                                              │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ addons  │ enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ start   │ -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:14 UTC │
	│ image   │ old-k8s-version-521710 image list --format=json                                                                                                                          │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │ 17 Oct 25 21:13 UTC │
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                               │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                              │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                     │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                     │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                          │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                             │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                              │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                             │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:16:51
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:16:51.238015  827198 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:16:51.238135  827198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:51.238147  827198 out.go:374] Setting ErrFile to fd 2...
	I1017 21:16:51.238152  827198 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:16:51.238416  827198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:16:51.238827  827198 out.go:368] Setting JSON to false
	I1017 21:16:51.239852  827198 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14357,"bootTime":1760721454,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:16:51.239921  827198 start.go:141] virtualization:  
	I1017 21:16:51.243439  827198 out.go:179] * [default-k8s-diff-port-332023] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:16:51.247430  827198 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:16:51.247540  827198 notify.go:220] Checking for updates...
	I1017 21:16:51.253393  827198 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:16:51.256398  827198 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:51.259377  827198 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:16:51.262229  827198 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:16:51.265170  827198 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:16:51.268555  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:51.269123  827198 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:16:51.293623  827198 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:16:51.293735  827198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:51.355761  827198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:51.345768005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:51.355886  827198 docker.go:318] overlay module found
	I1017 21:16:51.359188  827198 out.go:179] * Using the docker driver based on existing profile
	I1017 21:16:51.362118  827198 start.go:305] selected driver: docker
	I1017 21:16:51.362135  827198 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:51.362240  827198 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:16:51.362981  827198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:16:51.414582  827198 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:16:51.404656432 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:16:51.414950  827198 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:16:51.414992  827198 cni.go:84] Creating CNI manager for ""
	I1017 21:16:51.415060  827198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:51.415133  827198 start.go:349] cluster config:
	{Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:51.420009  827198 out.go:179] * Starting "default-k8s-diff-port-332023" primary control-plane node in "default-k8s-diff-port-332023" cluster
	I1017 21:16:51.422846  827198 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:16:51.425793  827198 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:16:51.428618  827198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:51.428683  827198 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:16:51.428693  827198 cache.go:58] Caching tarball of preloaded images
	I1017 21:16:51.428779  827198 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:16:51.428789  827198 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:16:51.428900  827198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:16:51.429223  827198 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:16:51.453285  827198 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:16:51.453306  827198 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:16:51.453328  827198 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:16:51.453352  827198 start.go:360] acquireMachinesLock for default-k8s-diff-port-332023: {Name:mkd5f10687dc08061f4c474fbb408a2c8ae57413 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:16:51.453416  827198 start.go:364] duration metric: took 46.089µs to acquireMachinesLock for "default-k8s-diff-port-332023"
	I1017 21:16:51.453462  827198 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:16:51.453469  827198 fix.go:54] fixHost starting: 
	I1017 21:16:51.453722  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:51.470517  827198 fix.go:112] recreateIfNeeded on default-k8s-diff-port-332023: state=Stopped err=<nil>
	W1017 21:16:51.470560  827198 fix.go:138] unexpected machine state, will restart: <nil>
	W1017 21:16:51.601876  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:54.101268  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:16:51.473506  827198 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-332023" ...
	I1017 21:16:51.473588  827198 cli_runner.go:164] Run: docker start default-k8s-diff-port-332023
	I1017 21:16:51.748063  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:51.773071  827198 kic.go:430] container "default-k8s-diff-port-332023" state is running.
	I1017 21:16:51.773747  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:51.799093  827198 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/config.json ...
	I1017 21:16:51.799462  827198 machine.go:93] provisionDockerMachine start ...
	I1017 21:16:51.799527  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:51.817245  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:51.817679  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:51.817721  827198 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:16:51.818489  827198 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:16:54.970818  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:16:54.970853  827198 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-332023"
	I1017 21:16:54.970965  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:54.988483  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:54.988806  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:54.988821  827198 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-332023 && echo "default-k8s-diff-port-332023" | sudo tee /etc/hostname
	I1017 21:16:55.153680  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-332023
	
	I1017 21:16:55.153837  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.171204  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:55.171520  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:55.171545  827198 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-332023' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-332023/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-332023' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:16:55.319597  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:16:55.319681  827198 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:16:55.319710  827198 ubuntu.go:190] setting up certificates
	I1017 21:16:55.319720  827198 provision.go:84] configureAuth start
	I1017 21:16:55.319801  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:55.341686  827198 provision.go:143] copyHostCerts
	I1017 21:16:55.341758  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:16:55.341779  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:16:55.341863  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:16:55.341974  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:16:55.341985  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:16:55.342011  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:16:55.342085  827198 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:16:55.342094  827198 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:16:55.342117  827198 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:16:55.342185  827198 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-332023 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-332023 localhost minikube]
	I1017 21:16:55.475885  827198 provision.go:177] copyRemoteCerts
	I1017 21:16:55.475958  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:16:55.476031  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.494966  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:55.600410  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:16:55.621151  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1017 21:16:55.639005  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1017 21:16:55.659428  827198 provision.go:87] duration metric: took 339.684866ms to configureAuth
	I1017 21:16:55.659455  827198 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:16:55.659648  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:55.659757  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:55.677279  827198 main.go:141] libmachine: Using SSH client type: native
	I1017 21:16:55.677589  827198 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33859 <nil> <nil>}
	I1017 21:16:55.677612  827198 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:16:56.014121  827198 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:16:56.014148  827198 machine.go:96] duration metric: took 4.214672332s to provisionDockerMachine
	I1017 21:16:56.014176  827198 start.go:293] postStartSetup for "default-k8s-diff-port-332023" (driver="docker")
	I1017 21:16:56.014188  827198 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:16:56.014266  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:16:56.014342  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.036923  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.143296  827198 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:16:56.146606  827198 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:16:56.146636  827198 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:16:56.146647  827198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:16:56.146759  827198 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:16:56.146846  827198 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:16:56.146957  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:16:56.154362  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:56.172055  827198 start.go:296] duration metric: took 157.863247ms for postStartSetup
	I1017 21:16:56.172146  827198 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:16:56.172192  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.189860  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.296073  827198 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:16:56.300603  827198 fix.go:56] duration metric: took 4.84712682s for fixHost
	I1017 21:16:56.300629  827198 start.go:83] releasing machines lock for "default-k8s-diff-port-332023", held for 4.847202406s
	I1017 21:16:56.300698  827198 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-332023
	I1017 21:16:56.317268  827198 ssh_runner.go:195] Run: cat /version.json
	I1017 21:16:56.317325  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.317386  827198 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:16:56.317587  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:56.334815  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.343278  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:56.552681  827198 ssh_runner.go:195] Run: systemctl --version
	I1017 21:16:56.559431  827198 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:16:56.596376  827198 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:16:56.601377  827198 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:16:56.601464  827198 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:16:56.609398  827198 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:16:56.609423  827198 start.go:495] detecting cgroup driver to use...
	I1017 21:16:56.609475  827198 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:16:56.609552  827198 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:16:56.625165  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:16:56.637995  827198 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:16:56.638124  827198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:16:56.653681  827198 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:16:56.667194  827198 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:16:56.792584  827198 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:16:56.921981  827198 docker.go:234] disabling docker service ...
	I1017 21:16:56.922057  827198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:16:56.938129  827198 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:16:56.952044  827198 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:16:57.075449  827198 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:16:57.248970  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:16:57.264020  827198 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:16:57.278958  827198 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:16:57.279113  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.291954  827198 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:16:57.292078  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.302646  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.315696  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.325589  827198 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:16:57.334015  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.342814  827198 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.352511  827198 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:16:57.362077  827198 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:16:57.371057  827198 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:16:57.379433  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:57.509531  827198 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:16:57.656532  827198 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:16:57.656636  827198 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:16:57.660902  827198 start.go:563] Will wait 60s for crictl version
	I1017 21:16:57.660997  827198 ssh_runner.go:195] Run: which crictl
	I1017 21:16:57.664783  827198 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:16:57.690259  827198 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:16:57.690401  827198 ssh_runner.go:195] Run: crio --version
	I1017 21:16:57.718318  827198 ssh_runner.go:195] Run: crio --version
	I1017 21:16:57.755649  827198 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:16:57.758586  827198 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-332023 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:16:57.774256  827198 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1017 21:16:57.777973  827198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:57.787392  827198 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:16:57.787512  827198 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:16:57.787565  827198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:57.818896  827198 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:57.818922  827198 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:16:57.818977  827198 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:16:57.845290  827198 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:16:57.845311  827198 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:16:57.845319  827198 kubeadm.go:934] updating node { 192.168.85.2 8444 v1.34.1 crio true true} ...
	I1017 21:16:57.845472  827198 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-332023 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:16:57.845581  827198 ssh_runner.go:195] Run: crio config
	I1017 21:16:57.916262  827198 cni.go:84] Creating CNI manager for ""
	I1017 21:16:57.916282  827198 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:16:57.916304  827198 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1017 21:16:57.916339  827198 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-332023 NodeName:default-k8s-diff-port-332023 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:16:57.916463  827198 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-332023"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:16:57.916551  827198 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:16:57.924361  827198 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:16:57.924483  827198 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:16:57.932065  827198 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1017 21:16:57.945693  827198 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:16:57.958634  827198 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1017 21:16:57.972380  827198 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:16:57.976491  827198 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:16:57.986130  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:58.103617  827198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:58.126545  827198 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023 for IP: 192.168.85.2
	I1017 21:16:58.126608  827198 certs.go:195] generating shared ca certs ...
	I1017 21:16:58.126638  827198 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:58.126809  827198 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:16:58.126875  827198 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:16:58.126896  827198 certs.go:257] generating profile certs ...
	I1017 21:16:58.127024  827198 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/client.key
	I1017 21:16:58.127156  827198 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key.a4419414
	I1017 21:16:58.127235  827198 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key
	I1017 21:16:58.127377  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:16:58.127439  827198 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:16:58.127464  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:16:58.127522  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:16:58.127572  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:16:58.127637  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:16:58.127710  827198 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:16:58.128370  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:16:58.156088  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:16:58.177435  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:16:58.199855  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:16:58.221849  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1017 21:16:58.308948  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1017 21:16:58.356757  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:16:58.380333  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/default-k8s-diff-port-332023/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1017 21:16:58.400581  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:16:58.419902  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:16:58.439835  827198 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:16:58.458620  827198 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:16:58.481875  827198 ssh_runner.go:195] Run: openssl version
	I1017 21:16:58.488419  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:16:58.496700  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.500691  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.500801  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:16:58.542151  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:16:58.551009  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:16:58.559868  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.563395  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.563460  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:16:58.605374  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:16:58.613391  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:16:58.621983  827198 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.625919  827198 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.626032  827198 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:16:58.668612  827198 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
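The preceding steps install each CA into the node's system trust store: copy the PEM, compute its OpenSSL subject hash, and point a <hash>.0 symlink in /etc/ssl/certs at it. A minimal Go sketch of that hash-and-link step (it shells out to openssl exactly as the log does; running it for real needs the same root privileges as the sudo commands above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a CA certificate and
// creates the /etc/ssl/certs/<hash>.0 symlink the system trust store expects,
// mirroring the "openssl x509 -hash" + "ln -fs" steps in the log above.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}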
	I1017 21:16:58.676944  827198 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:16:58.681376  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:16:58.732477  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:16:58.774893  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:16:58.820418  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:16:58.881595  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:16:58.947467  827198 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
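Each openssl x509 -checkend 86400 run exits non-zero if the certificate expires within the next 24 hours, which is what would force regeneration. The same test can be expressed with the Go standard library; a hedged sketch (the certificate path is one of the files checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window, the same condition "openssl x509 -checkend" tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}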
	I1017 21:16:59.035284  827198 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-332023 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-332023 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:16:59.035375  827198 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:16:59.035517  827198 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:16:59.088983  827198 cri.go:89] found id: "3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b"
	I1017 21:16:59.089007  827198 cri.go:89] found id: "dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7"
	I1017 21:16:59.089015  827198 cri.go:89] found id: "3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18"
	I1017 21:16:59.089019  827198 cri.go:89] found id: "da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5"
	I1017 21:16:59.089058  827198 cri.go:89] found id: ""
	I1017 21:16:59.089141  827198 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:16:59.105866  827198 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:16:59Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:16:59.105978  827198 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:16:59.118692  827198 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:16:59.118713  827198 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:16:59.118795  827198 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:16:59.129842  827198 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:16:59.130743  827198 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-332023" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:59.131403  827198 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-332023" cluster setting kubeconfig missing "default-k8s-diff-port-332023" context setting]
	I1017 21:16:59.132345  827198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:59.134305  827198 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:16:59.144745  827198 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1017 21:16:59.144806  827198 kubeadm.go:601] duration metric: took 26.069063ms to restartPrimaryControlPlane
	I1017 21:16:59.144826  827198 kubeadm.go:402] duration metric: took 109.555249ms to StartCluster
	I1017 21:16:59.144844  827198 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:16:59.144920  827198 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:16:59.146427  827198 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
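The repair above is triggered because the profile's cluster and context entries are missing from the kubeconfig. A hedged client-go sketch of that check (not minikube's actual code), using the kubeconfig path from the log:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// needsRepair reports whether the kubeconfig at path is missing either the
// cluster entry or the context entry for the given profile name, which is the
// condition behind the "needs updating (will repair)" message above.
func needsRepair(path, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return false, err
	}
	_, hasCluster := cfg.Clusters[name]
	_, hasContext := cfg.Contexts[name]
	return !hasCluster || !hasContext, nil
}

func main() {
	repair, err := needsRepair("/home/jenkins/minikube-integration/21664-584308/kubeconfig", "default-k8s-diff-port-332023")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kubeconfig needs repair:", repair)
}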
	I1017 21:16:59.146723  827198 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:16:59.147113  827198 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:16:59.147186  827198 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.147200  827198 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.147206  827198 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:16:59.147231  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.147406  827198 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:16:59.147447  827198 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.147456  827198 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.147462  827198 addons.go:247] addon dashboard should already be in state true
	I1017 21:16:59.147479  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.147690  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.148116  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.148424  827198 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-332023"
	I1017 21:16:59.148451  827198 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-332023"
	I1017 21:16:59.148726  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:16:59.154165  827198 out.go:179] * Verifying Kubernetes components...
	I1017 21:16:59.157699  827198 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:16:59.200539  827198 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:16:59.203584  827198 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:59.203608  827198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:16:59.203671  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.212715  827198 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-332023"
	W1017 21:16:59.212801  827198 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:16:59.212842  827198 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:16:59.213346  827198 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
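Each "Checking if ... exists" step boils down to the docker container inspect call with a Go template for the container state, as shown by the cli_runner lines. A minimal sketch of the same probe (error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the docker state (e.g. "running", "exited") of the
// named container, using the same inspect template seen in the log.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	state, err := containerState("default-k8s-diff-port-332023")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("container state:", state)
}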
	I1017 21:16:59.223515  827198 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:16:59.230140  827198 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1017 21:16:56.101592  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	W1017 21:16:58.112215  824247 pod_ready.go:104] pod "coredns-66bc5c9577-7c4gn" is not "Ready", error: <nil>
	I1017 21:17:00.606512  824247 pod_ready.go:94] pod "coredns-66bc5c9577-7c4gn" is "Ready"
	I1017 21:17:00.606540  824247 pod_ready.go:86] duration metric: took 33.011183013s for pod "coredns-66bc5c9577-7c4gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.613423  824247 pod_ready.go:83] waiting for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.620942  824247 pod_ready.go:94] pod "etcd-embed-certs-629583" is "Ready"
	I1017 21:17:00.621019  824247 pod_ready.go:86] duration metric: took 7.570186ms for pod "etcd-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.624552  824247 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.632724  824247 pod_ready.go:94] pod "kube-apiserver-embed-certs-629583" is "Ready"
	I1017 21:17:00.632799  824247 pod_ready.go:86] duration metric: took 8.16792ms for pod "kube-apiserver-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.636053  824247 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:00.800479  824247 pod_ready.go:94] pod "kube-controller-manager-embed-certs-629583" is "Ready"
	I1017 21:17:00.800571  824247 pod_ready.go:86] duration metric: took 164.442111ms for pod "kube-controller-manager-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.000101  824247 pod_ready.go:83] waiting for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:16:59.233125  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:16:59.233151  827198 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:16:59.233227  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.267665  827198 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:59.267686  827198 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:16:59.267746  827198 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:16:59.275288  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.307363  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.313831  827198 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:16:59.525834  827198 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:16:59.539408  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:16:59.603053  827198 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:16:59.604710  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:16:59.604735  827198 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:16:59.636555  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:16:59.691677  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:16:59.691703  827198 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:16:59.751815  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:16:59.751855  827198 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:16:59.825521  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:16:59.825553  827198 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:16:59.869098  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:16:59.869129  827198 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:16:59.897734  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:16:59.897760  827198 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:16:59.928295  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:16:59.928336  827198 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:16:59.950861  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:16:59.950887  827198 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:16:59.973826  827198 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:16:59.973858  827198 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:16:59.996443  827198 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
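All ten dashboard manifests are applied in a single kubectl invocation. A hedged sketch of assembling that command, using the kubectl binary and manifest paths shown in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
		"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
		"dashboard-dp.yaml", "dashboard-role.yaml",
		"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
		"dashboard-secret.yaml", "dashboard-svc.yaml",
	}
	// Build: kubectl apply -f <m1> -f <m2> ... against the in-VM kubeconfig.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", "/etc/kubernetes/addons/"+m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}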
	I1017 21:17:01.399986  824247 pod_ready.go:94] pod "kube-proxy-p98l2" is "Ready"
	I1017 21:17:01.400066  824247 pod_ready.go:86] duration metric: took 399.876003ms for pod "kube-proxy-p98l2" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.599622  824247 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.999719  824247 pod_ready.go:94] pod "kube-scheduler-embed-certs-629583" is "Ready"
	I1017 21:17:01.999796  824247 pod_ready.go:86] duration metric: took 400.085877ms for pod "kube-scheduler-embed-certs-629583" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:01.999823  824247 pod_ready.go:40] duration metric: took 34.411466806s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:02.099638  824247 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:17:02.102860  824247 out.go:179] * Done! kubectl is now configured to use "embed-certs-629583" cluster and "default" namespace by default
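The pod_ready helper above polls each control-plane pod until its Ready condition is True or the pod disappears. A client-go sketch of an equivalent check, assuming the kubeconfig path from the log; this is an illustration, not minikube's implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReadyOrGone returns true once the named pod is Ready or no longer exists.
func podReadyOrGone(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) (bool, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return true, nil // "or be gone"
		}
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return false, fmt.Errorf("timed out waiting for %s/%s", ns, name)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-584308/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ok, err := podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-7c4gn", 4*time.Minute)
	fmt.Println(ok, err)
}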
	I1017 21:17:04.325105  827198 node_ready.go:49] node "default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:04.325133  827198 node_ready.go:38] duration metric: took 4.722049715s for node "default-k8s-diff-port-332023" to be "Ready" ...
	I1017 21:17:04.325169  827198 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:17:04.325247  827198 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:17:04.506493  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.967033904s)
	I1017 21:17:05.854666  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.218055461s)
	I1017 21:17:05.890212  827198 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.893725094s)
	I1017 21:17:05.890325  827198 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.565066386s)
	I1017 21:17:05.890425  827198 api_server.go:72] duration metric: took 6.743669909s to wait for apiserver process to appear ...
	I1017 21:17:05.890434  827198 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:17:05.890459  827198 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 21:17:05.894752  827198 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-332023 addons enable metrics-server
	
	I1017 21:17:05.897856  827198 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1017 21:17:05.900241  827198 addons.go:514] duration metric: took 6.753124377s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1017 21:17:05.911865  827198 api_server.go:279] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:17:05.911950  827198 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:17:06.390582  827198 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1017 21:17:06.400257  827198 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1017 21:17:06.401787  827198 api_server.go:141] control plane version: v1.34.1
	I1017 21:17:06.401819  827198 api_server.go:131] duration metric: took 511.376695ms to wait for apiserver health ...
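The initial 500 is expected: the only failing probe in the dump above is the rbac/bootstrap-roles post-start hook, and the wait loop simply retries /healthz until it returns 200 "ok". A hedged sketch of such a probe (TLS verification is skipped here for brevity; a real check would trust the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it answers 200.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.85.2:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}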
	I1017 21:17:06.401829  827198 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:17:06.405904  827198 system_pods.go:59] 8 kube-system pods found
	I1017 21:17:06.405992  827198 system_pods.go:61] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:17:06.406026  827198 system_pods.go:61] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:17:06.406039  827198 system_pods.go:61] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:17:06.406048  827198 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:17:06.406056  827198 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:17:06.406062  827198 system_pods.go:61] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:17:06.406078  827198 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:17:06.406082  827198 system_pods.go:61] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Running
	I1017 21:17:06.406088  827198 system_pods.go:74] duration metric: took 4.253057ms to wait for pod list to return data ...
	I1017 21:17:06.406096  827198 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:17:06.409455  827198 default_sa.go:45] found service account: "default"
	I1017 21:17:06.409486  827198 default_sa.go:55] duration metric: took 3.378897ms for default service account to be created ...
	I1017 21:17:06.409500  827198 system_pods.go:116] waiting for k8s-apps to be running ...
	I1017 21:17:06.413245  827198 system_pods.go:86] 8 kube-system pods found
	I1017 21:17:06.413281  827198 system_pods.go:89] "coredns-66bc5c9577-nvmzl" [9748deef-241f-4101-a37b-e6aebe976464] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1017 21:17:06.413292  827198 system_pods.go:89] "etcd-default-k8s-diff-port-332023" [dbb1577d-9545-42b0-b5c4-cf8f82b6e13c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:17:06.413327  827198 system_pods.go:89] "kindnet-29xbg" [d2fd2528-5232-4574-a792-3be8eca99a9d] Running
	I1017 21:17:06.413335  827198 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-332023" [5b65158f-eaef-4cb1-a1a3-67f195c1dbc8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:17:06.413343  827198 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-332023" [2395ddb8-1184-4c22-8281-412b58f66b94] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:17:06.413355  827198 system_pods.go:89] "kube-proxy-rh2gh" [2c3d9c06-0fd9-448b-b4e6-872d16233b50] Running
	I1017 21:17:06.413362  827198 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-332023" [e6a48c9f-4e76-4d58-8c2d-161e52f7d580] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:17:06.413367  827198 system_pods.go:89] "storage-provisioner" [59ed862b-5b8f-42cd-92cd-331c3436056f] Running
	I1017 21:17:06.413374  827198 system_pods.go:126] duration metric: took 3.868799ms to wait for k8s-apps to be running ...
	I1017 21:17:06.413391  827198 system_svc.go:44] waiting for kubelet service to be running ....
	I1017 21:17:06.413454  827198 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:17:06.427621  827198 system_svc.go:56] duration metric: took 14.214772ms WaitForService to wait for kubelet
	I1017 21:17:06.427662  827198 kubeadm.go:586] duration metric: took 7.280907525s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1017 21:17:06.427720  827198 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:17:06.431049  827198 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:17:06.431172  827198 node_conditions.go:123] node cpu capacity is 2
	I1017 21:17:06.431194  827198 node_conditions.go:105] duration metric: took 3.461598ms to run NodePressure ...
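The NodePressure step reads the node's capacity figures (the 203034800Ki of ephemeral storage and 2 CPUs reported above) and confirms no pressure condition is set. A client-go sketch of the same read, again assuming the kubeconfig path from the log rather than minikube's own wiring:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21664-584308/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "default-k8s-diff-port-332023", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
	fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
	for _, c := range node.Status.Conditions {
		// Any non-Ready condition reported True (MemoryPressure, DiskPressure,
		// PIDPressure) would fail the verification.
		if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
			fmt.Printf("node under pressure: %s\n", c.Type)
		}
	}
}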
	I1017 21:17:06.431210  827198 start.go:241] waiting for startup goroutines ...
	I1017 21:17:06.431241  827198 start.go:246] waiting for cluster config update ...
	I1017 21:17:06.431258  827198 start.go:255] writing updated cluster config ...
	I1017 21:17:06.431558  827198 ssh_runner.go:195] Run: rm -f paused
	I1017 21:17:06.435811  827198 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:06.440356  827198 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	W1017 21:17:08.447599  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:10.447686  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:12.949910  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:15.447467  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.17441779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178350474Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178511477Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.178592389Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182435989Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182598256Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.182687611Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187658568Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187698167Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.187724317Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.191791181Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:06 embed-certs-629583 crio[652]: time="2025-10-17T21:17:06.191832979Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.877409143Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ad6bb547-8ccb-48c4-bd84-1169586b0323 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.879794756Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6257001a-6d1b-41bd-8ea7-c0de75fb9787 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.881333523Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=bb58099b-d55d-4def-b67f-7951bdbf3b85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.881880549Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.914859828Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:13 embed-certs-629583 crio[652]: time="2025-10-17T21:17:13.91559943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.018180944Z" level=info msg="Created container 1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=bb58099b-d55d-4def-b67f-7951bdbf3b85 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.023349394Z" level=info msg="Starting container: 1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db" id=4e977bf6-3176-401b-b6e0-749b1682fd08 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.034482553Z" level=info msg="Started container" PID=1731 containerID=1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper id=4e977bf6-3176-401b-b6e0-749b1682fd08 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b0d63524e6ccd1725064fb811e513176c199c45f78dde2d2e39edfde85719a5
	Oct 17 21:17:14 embed-certs-629583 conmon[1729]: conmon 1961abb57bee650bdee8 <ninfo>: container 1731 exited with status 1
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.162054225Z" level=info msg="Removing container: 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.185149312Z" level=info msg="Error loading conmon cgroup of container 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d: cgroup deleted" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 17 21:17:14 embed-certs-629583 crio[652]: time="2025-10-17T21:17:14.195599656Z" level=info msg="Removed container 880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g/dashboard-metrics-scraper" id=4aac8ff0-e267-4ad5-86e6-f3f53547975b name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	1961abb57bee6       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago        Exited              dashboard-metrics-scraper   3                   3b0d63524e6cc       dashboard-metrics-scraper-6ffb444bf9-4rn7g   kubernetes-dashboard
	80ca067c7617a       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   2dda121d69624       storage-provisioner                          kube-system
	7fc3b7613495f       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   40 seconds ago       Running             kubernetes-dashboard        0                   2ad7b43df11e5       kubernetes-dashboard-855c9754f9-j59qn        kubernetes-dashboard
	e6bdde4121a84       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           55 seconds ago       Running             busybox                     1                   ac989fc53d4ed       busybox                                      default
	2a2555eb03e49       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   767d6589b7e06       kube-proxy-p98l2                             kube-system
	17885c6005bae       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   a972881142a11       coredns-66bc5c9577-7c4gn                     kube-system
	417b3b3922976       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           55 seconds ago       Running             kindnet-cni                 1                   2d6d2e3052784       kindnet-tqd9k                                kube-system
	fb31580516aa3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   2dda121d69624       storage-provisioner                          kube-system
	d0a52582cdef3       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   641db263ccaf0       kube-apiserver-embed-certs-629583            kube-system
	0565e636fbd6f       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9f56a4c0ef000       kube-scheduler-embed-certs-629583            kube-system
	68303ee075b96       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   609a054cad244       etcd-embed-certs-629583                      kube-system
	024a29d84fb3e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   47159af6c028c       kube-controller-manager-embed-certs-629583   kube-system
	
	
	==> coredns [17885c6005baef49209583e7551e55679f0e578cbfde4c129f765f29985927da] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41762 - 11137 "HINFO IN 3775923736227160177.3505267854412391022. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012305261s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
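Every CoreDNS error above is a dial timeout to the kubernetes Service VIP (10.96.0.1:443), consistent with kube-proxy and the CNI still converging after the restart. A trivial Go probe of the same path, useful when reproducing the symptom from inside the cluster (address taken from the log):

package main

import (
	"fmt"
	"net"
	"time"
)

// probe dials the kubernetes Service VIP the way the failing client-go
// reflector does, and reports whether the connection completes in time.
func probe(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return err
	}
	return conn.Close()
}

func main() {
	if err := probe("10.96.0.1:443", 5*time.Second); err != nil {
		fmt.Println("service VIP unreachable:", err)
		return
	}
	fmt.Println("service VIP reachable")
}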
	
	
	==> describe nodes <==
	Name:               embed-certs-629583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-629583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=embed-certs-629583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_14_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:14:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-629583
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:14:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:16:56 +0000   Fri, 17 Oct 2025 21:15:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-629583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                c9c2881c-9e92-49bc-ace3-9a4a72830c65
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-7c4gn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m29s
	  kube-system                 etcd-embed-certs-629583                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m34s
	  kube-system                 kindnet-tqd9k                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m29s
	  kube-system                 kube-apiserver-embed-certs-629583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-controller-manager-embed-certs-629583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 kube-proxy-p98l2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 kube-scheduler-embed-certs-629583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4rn7g    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-j59qn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m28s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Warning  CgroupV1                 2m42s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m42s (x8 over 2m42s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m42s (x8 over 2m42s)  kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m42s (x8 over 2m42s)  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m34s                  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m34s                  kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m34s                  kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m30s                  node-controller  Node embed-certs-629583 event: Registered Node embed-certs-629583 in Controller
	  Normal   NodeReady                108s                   kubelet          Node embed-certs-629583 status is now: NodeReady
	  Normal   Starting                 63s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 63s)      kubelet          Node embed-certs-629583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 63s)      kubelet          Node embed-certs-629583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 63s)      kubelet          Node embed-certs-629583 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           52s                    node-controller  Node embed-certs-629583 event: Registered Node embed-certs-629583 in Controller
	
	
	==> dmesg <==
	[Oct17 20:52] overlayfs: idmapped layers are currently not supported
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [68303ee075b96d214df46d49f1815eb015cc1ba7193839df388f7f38d171f004] <==
	{"level":"warn","ts":"2025-10-17T21:16:22.875703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.885931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.897083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.944388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:22.980473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.041050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.048055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.093160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.118450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.141625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.156894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.178709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.193240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.212496Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.233296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.247039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.277766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.295358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.312803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.335899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.376839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.438675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.443859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.467868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:16:23.570990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59672","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:17:21 up  3:59,  0 user,  load average: 4.40, 3.74, 3.26
	Linux embed-certs-629583 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [417b3b3922976d0c223b2a75f1f19f847ad7152221fd7077a44ee3c4c849f25b] <==
	I1017 21:16:25.947925       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:16:25.950807       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:16:25.950927       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:16:25.950939       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:16:25.950952       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:16:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:16:26.165911       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:16:26.165979       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:16:26.166012       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:16:26.166770       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:16:56.166718       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:16:56.166931       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1017 21:16:56.167066       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:16:56.167236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1017 21:16:57.567089       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:16:57.567227       1 metrics.go:72] Registering metrics
	I1017 21:16:57.567309       1 controller.go:711] "Syncing nftables rules"
	I1017 21:17:06.169631       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:17:06.169742       1 main.go:301] handling current node
	I1017 21:17:16.166038       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1017 21:17:16.166148       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d0a52582cdef3beb68214ae0dbfcb1593501e07a37d0d000ded6f52417206ab3] <==
	I1017 21:16:25.051790       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:16:25.059533       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:16:25.092150       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1017 21:16:25.092269       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:16:25.111852       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:16:25.112313       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:16:25.117696       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:16:25.139158       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:16:25.459322       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:16:25.142393       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:16:25.142489       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:16:25.157357       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:16:25.197006       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:16:25.438567       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1017 21:16:25.610719       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:16:25.667387       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:16:26.163979       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:16:26.380612       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:16:26.458777       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:16:26.496883       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:16:26.688818       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.24.169"}
	I1017 21:16:26.734288       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.90.202"}
	I1017 21:16:29.229895       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:16:29.627571       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:16:29.728668       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [024a29d84fb3e23f712d6876e4d7aca3cfe2fed1d32490ac0391b3d1a0a0767b] <==
	I1017 21:16:29.240816       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:16:29.246062       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:16:29.249279       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:16:29.260730       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1017 21:16:29.260826       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1017 21:16:29.260861       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1017 21:16:29.260867       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1017 21:16:29.260873       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:16:29.263886       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:16:29.264004       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:16:29.264088       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-629583"
	I1017 21:16:29.264132       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:16:29.267066       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:16:29.267224       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1017 21:16:29.271784       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:16:29.271877       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1017 21:16:29.271914       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 21:16:29.272107       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:16:29.272433       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:16:29.272494       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:16:29.273013       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:16:29.279648       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:16:29.279675       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:16:29.279683       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:16:29.285245       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2a2555eb03e49c384cb5cdfff4f5e1c95b87ec94104bcbf346d1410e7ef452c0] <==
	I1017 21:16:26.561957       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:16:27.113205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:16:27.239580       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:16:27.239732       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:16:27.239849       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:16:27.416806       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:16:27.422019       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:16:27.428623       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:16:27.428984       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:16:27.428999       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:16:27.430079       1 config.go:200] "Starting service config controller"
	I1017 21:16:27.430122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:16:27.433775       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:16:27.433854       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:16:27.433897       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:16:27.433925       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:16:27.438843       1 config.go:309] "Starting node config controller"
	I1017 21:16:27.438936       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:16:27.438968       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:16:27.530889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:16:27.534227       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:16:27.534260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [0565e636fbd6fe5685e50f65d6f7a5fd5e693e8900d853a8bdffd2ddb85790e0] <==
	I1017 21:16:25.873281       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:16:28.331183       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:16:28.331212       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:16:28.337006       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:16:28.337105       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:16:28.337167       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.337199       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.337240       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:16:28.337270       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:16:28.337537       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:16:28.337620       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:16:28.437304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:16:28.437326       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:16:28.437357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:16:30 embed-certs-629583 kubelet[776]: W1017 21:16:30.266881     776 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/792e6eed90d975519df5d641d8fc1a7325caf5c8302f39c522d6612ba52cb7fa/crio-2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1 WatchSource:0}: Error finding container 2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1: Status 404 returned error can't find the container with id 2ad7b43df11e50913fa83672ad705739860becbe33053358c890236cb9f4beb1
	Oct 17 21:16:30 embed-certs-629583 kubelet[776]: I1017 21:16:30.499864     776 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 17 21:16:35 embed-certs-629583 kubelet[776]: I1017 21:16:35.018195     776 scope.go:117] "RemoveContainer" containerID="0169d7167d27a1fc95d8dd44030834db0f5f929505e43474f1e787f476aa48b8"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: I1017 21:16:36.026362     776 scope.go:117] "RemoveContainer" containerID="0169d7167d27a1fc95d8dd44030834db0f5f929505e43474f1e787f476aa48b8"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: I1017 21:16:36.026773     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:36 embed-certs-629583 kubelet[776]: E1017 21:16:36.026951     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:37 embed-certs-629583 kubelet[776]: I1017 21:16:37.031669     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:37 embed-certs-629583 kubelet[776]: E1017 21:16:37.031829     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:40 embed-certs-629583 kubelet[776]: I1017 21:16:40.203142     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:40 embed-certs-629583 kubelet[776]: E1017 21:16:40.203881     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:50 embed-certs-629583 kubelet[776]: I1017 21:16:50.876517     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:51 embed-certs-629583 kubelet[776]: I1017 21:16:51.079917     776 scope.go:117] "RemoveContainer" containerID="26b1c3fd628f272477968c8c5dced6200e4fd8d5da81561007703fdfab9be398"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: I1017 21:16:52.087791     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: E1017 21:16:52.088769     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:16:52 embed-certs-629583 kubelet[776]: I1017 21:16:52.116098     776 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-j59qn" podStartSLOduration=12.459569211 podStartE2EDuration="23.115766806s" podCreationTimestamp="2025-10-17 21:16:29 +0000 UTC" firstStartedPulling="2025-10-17 21:16:30.269774233 +0000 UTC m=+11.539106448" lastFinishedPulling="2025-10-17 21:16:40.92597182 +0000 UTC m=+22.195304043" observedRunningTime="2025-10-17 21:16:41.073876845 +0000 UTC m=+22.343209076" watchObservedRunningTime="2025-10-17 21:16:52.115766806 +0000 UTC m=+33.385099029"
	Oct 17 21:16:57 embed-certs-629583 kubelet[776]: I1017 21:16:57.100723     776 scope.go:117] "RemoveContainer" containerID="fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be"
	Oct 17 21:17:00 embed-certs-629583 kubelet[776]: I1017 21:17:00.202621     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:00 embed-certs-629583 kubelet[776]: E1017 21:17:00.202841     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:17:13 embed-certs-629583 kubelet[776]: I1017 21:17:13.876837     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: I1017 21:17:14.157741     776 scope.go:117] "RemoveContainer" containerID="880e52a8c6c05374687619bf3704aa85b0f0922ff8f343afc1438f7b6f93b53d"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: I1017 21:17:14.158042     776 scope.go:117] "RemoveContainer" containerID="1961abb57bee650bdee816b4036d5e029c8fb390e888bafaf8e01a5d8cf1f2db"
	Oct 17 21:17:14 embed-certs-629583 kubelet[776]: E1017 21:17:14.158198     776 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4rn7g_kubernetes-dashboard(8b59a0d5-6aa8-4c81-b74f-435e5af1e95b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4rn7g" podUID="8b59a0d5-6aa8-4c81-b74f-435e5af1e95b"
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:17:14 embed-certs-629583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [7fc3b7613495f459fd4fbc98d8a4f6fc15bd14dd358c1b58365ee9a9707f278f] <==
	2025/10/17 21:16:40 Using namespace: kubernetes-dashboard
	2025/10/17 21:16:40 Using in-cluster config to connect to apiserver
	2025/10/17 21:16:40 Using secret token for csrf signing
	2025/10/17 21:16:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:16:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:16:40 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:16:40 Generating JWE encryption key
	2025/10/17 21:16:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:16:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:16:41 Initializing JWE encryption key from synchronized object
	2025/10/17 21:16:41 Creating in-cluster Sidecar client
	2025/10/17 21:16:41 Serving insecurely on HTTP port: 9090
	2025/10/17 21:16:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:16:40 Starting overwatch
	
	
	==> storage-provisioner [80ca067c7617a34941511a2c6d8b81514e673e55ed6ad60b8c7bd37c4783280c] <==
	I1017 21:16:57.199843       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1017 21:16:57.220230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1017 21:16:57.220394       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1017 21:16:57.231433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:00.688650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:04.949652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:08.549774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:11.603639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.630543       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.643060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:14.645979       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:17:14.646305       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"22cc3084-8a79-473b-b8ad-1d1d682dd739", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973 became leader
	I1017 21:17:14.646350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973!
	W1017 21:17:14.650238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:14.682817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:14.747262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-629583_2245f6f6-968c-49a8-8b04-0095d03eb973!
	W1017 21:17:16.689037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:16.696692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:18.700462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:18.714004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:20.718848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:20.736508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [fb31580516aa393401bf8123dd5ce73b8b2456e7e4c593cce5da052471b2b0be] <==
	I1017 21:16:26.263142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:16:56.265208       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629583 -n embed-certs-629583
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-629583 -n embed-certs-629583: exit status 2 (360.376632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-629583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (8.03s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-332023 --alsologtostderr -v=1
E1017 21:18:02.211176  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-332023 --alsologtostderr -v=1: exit status 80 (2.214948059s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-332023 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 21:18:01.393585  833060 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:18:01.393840  833060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:01.393879  833060 out.go:374] Setting ErrFile to fd 2...
	I1017 21:18:01.393903  833060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:01.394214  833060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:18:01.394543  833060 out.go:368] Setting JSON to false
	I1017 21:18:01.394621  833060 mustload.go:65] Loading cluster: default-k8s-diff-port-332023
	I1017 21:18:01.395150  833060 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:01.398597  833060 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-332023 --format={{.State.Status}}
	I1017 21:18:01.422719  833060 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:18:01.423033  833060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:01.488729  833060 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 21:18:01.473740474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:01.489418  833060 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-332023 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 21:18:01.492771  833060 out.go:179] * Pausing node default-k8s-diff-port-332023 ... 
	I1017 21:18:01.496466  833060 host.go:66] Checking if "default-k8s-diff-port-332023" exists ...
	I1017 21:18:01.496810  833060 ssh_runner.go:195] Run: systemctl --version
	I1017 21:18:01.496856  833060 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-332023
	I1017 21:18:01.516260  833060 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33859 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/default-k8s-diff-port-332023/id_rsa Username:docker}
	I1017 21:18:01.626660  833060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:01.640193  833060 pause.go:52] kubelet running: true
	I1017 21:18:01.640282  833060 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:01.923192  833060 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:01.923284  833060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:02.020227  833060 cri.go:89] found id: "c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77"
	I1017 21:18:02.020249  833060 cri.go:89] found id: "1a5a39b4bbd639ee14240f6d0ab58f5317fd0a79c4d8b4ca7f73c246bd827c65"
	I1017 21:18:02.020255  833060 cri.go:89] found id: "a1dce463c036c682d336727ebc5030e6c9acb8a703ec87097c08b12c202fc8bb"
	I1017 21:18:02.020258  833060 cri.go:89] found id: "6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9"
	I1017 21:18:02.020262  833060 cri.go:89] found id: "98f498aab54612da243b71a7d5b7189c25ffc04ef6e6f4d23431cb88b69ee3f9"
	I1017 21:18:02.020265  833060 cri.go:89] found id: "3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b"
	I1017 21:18:02.020268  833060 cri.go:89] found id: "dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7"
	I1017 21:18:02.020270  833060 cri.go:89] found id: "3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18"
	I1017 21:18:02.020274  833060 cri.go:89] found id: "da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5"
	I1017 21:18:02.020283  833060 cri.go:89] found id: "807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	I1017 21:18:02.020287  833060 cri.go:89] found id: "c0cc1e2037d3cbd63794dd670636bf547be7c76d91e90dc18187d5fc6258f357"
	I1017 21:18:02.020290  833060 cri.go:89] found id: ""
	I1017 21:18:02.020348  833060 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:02.032777  833060 retry.go:31] will retry after 187.868561ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:02Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:18:02.221202  833060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:02.236179  833060 pause.go:52] kubelet running: false
	I1017 21:18:02.236248  833060 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:02.548392  833060 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:02.548468  833060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:02.688656  833060 cri.go:89] found id: "c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77"
	I1017 21:18:02.688680  833060 cri.go:89] found id: "1a5a39b4bbd639ee14240f6d0ab58f5317fd0a79c4d8b4ca7f73c246bd827c65"
	I1017 21:18:02.688685  833060 cri.go:89] found id: "a1dce463c036c682d336727ebc5030e6c9acb8a703ec87097c08b12c202fc8bb"
	I1017 21:18:02.688689  833060 cri.go:89] found id: "6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9"
	I1017 21:18:02.688692  833060 cri.go:89] found id: "98f498aab54612da243b71a7d5b7189c25ffc04ef6e6f4d23431cb88b69ee3f9"
	I1017 21:18:02.688695  833060 cri.go:89] found id: "3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b"
	I1017 21:18:02.688698  833060 cri.go:89] found id: "dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7"
	I1017 21:18:02.688701  833060 cri.go:89] found id: "3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18"
	I1017 21:18:02.688705  833060 cri.go:89] found id: "da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5"
	I1017 21:18:02.688711  833060 cri.go:89] found id: "807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	I1017 21:18:02.688714  833060 cri.go:89] found id: "c0cc1e2037d3cbd63794dd670636bf547be7c76d91e90dc18187d5fc6258f357"
	I1017 21:18:02.688717  833060 cri.go:89] found id: ""
	I1017 21:18:02.688771  833060 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:02.705633  833060 retry.go:31] will retry after 370.568591ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:02Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:18:03.077251  833060 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:03.094461  833060 pause.go:52] kubelet running: false
	I1017 21:18:03.094538  833060 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:03.371821  833060 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:03.371899  833060 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:03.481469  833060 cri.go:89] found id: "c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77"
	I1017 21:18:03.481491  833060 cri.go:89] found id: "1a5a39b4bbd639ee14240f6d0ab58f5317fd0a79c4d8b4ca7f73c246bd827c65"
	I1017 21:18:03.481495  833060 cri.go:89] found id: "a1dce463c036c682d336727ebc5030e6c9acb8a703ec87097c08b12c202fc8bb"
	I1017 21:18:03.481498  833060 cri.go:89] found id: "6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9"
	I1017 21:18:03.481501  833060 cri.go:89] found id: "98f498aab54612da243b71a7d5b7189c25ffc04ef6e6f4d23431cb88b69ee3f9"
	I1017 21:18:03.481505  833060 cri.go:89] found id: "3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b"
	I1017 21:18:03.481507  833060 cri.go:89] found id: "dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7"
	I1017 21:18:03.481510  833060 cri.go:89] found id: "3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18"
	I1017 21:18:03.481513  833060 cri.go:89] found id: "da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5"
	I1017 21:18:03.481519  833060 cri.go:89] found id: "807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	I1017 21:18:03.481522  833060 cri.go:89] found id: "c0cc1e2037d3cbd63794dd670636bf547be7c76d91e90dc18187d5fc6258f357"
	I1017 21:18:03.481525  833060 cri.go:89] found id: ""
	I1017 21:18:03.481571  833060 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:03.504378  833060 out.go:203] 
	W1017 21:18:03.507446  833060 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 21:18:03.507469  833060 out.go:285] * 
	* 
	W1017 21:18:03.515524  833060 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 21:18:03.518426  833060 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-332023 --alsologtostderr -v=1 failed: exit status 80
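The stderr above shows the shape of the failure: minikube's pause flow stops kubelet, lists the kube-system/kubernetes-dashboard containers through crictl (which succeeds), then shells out to `sudo runc list -f json`, which exits 1 because `/run/runc` is missing on the node; after two retries it aborts with GUEST_PAUSE. A minimal sketch for reproducing that probe by hand over `minikube ssh` is below; the profile name is taken from this run, and the final `crio config` grep is only an assumption about how to inspect CRI-O's configured runc state directory, not something these logs confirm.

	# Hypothetical manual reproduction of the pause probe (assumes the profile is still running).
	PROFILE=default-k8s-diff-port-332023
	# Containers as CRI-O reports them -- this step succeeded in the log above.
	minikube -p "$PROFILE" ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The call that failed: runc's default state directory is absent.
	minikube -p "$PROFILE" ssh -- sudo runc list -f json
	minikube -p "$PROFILE" ssh -- ls -ld /run/runc
	# Assumption: check whether CRI-O points runc at a different runtime root instead.
	minikube -p "$PROFILE" ssh -- sudo crio config | grep -i -A3 runtime_root
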
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-332023
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-332023:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	        "Created": "2025-10-17T21:15:10.315339717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 827327,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:16:51.502994319Z",
	            "FinishedAt": "2025-10-17T21:16:50.519715874Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98-json.log",
	        "Name": "/default-k8s-diff-port-332023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-332023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-332023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	                "LowerDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-332023",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-332023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-332023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "decc947f5fe25636b1c4d22828a045cd833da65105c1af49ef1a4cd8aa343ec6",
	            "SandboxKey": "/var/run/docker/netns/decc947f5fe2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-332023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:ec:5f:c8:c8:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26a553f884b09380ce04b950347080a804cedc891493065a8f217a57e449901d",
	                    "EndpointID": "03deacc45da31451e520bf4e1621196d392192dce1fa4eade20b4afa7f7d06a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-332023",
	                        "cbf8d10c5cde"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
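The docker inspect output above shows how the kic container publishes every service port to an ephemeral host port on 127.0.0.1 (22/tcp -> 33859, 2376/tcp -> 33860, 8444/tcp -> 33862, and so on); later in this log the harness resolves these mappings with a `docker container inspect -f` Go template. Below is a minimal, illustrative Go sketch of that lookup, assuming the docker CLI is on PATH and using the profile name above as the container name:

// Illustrative sketch: resolve the ephemeral host port Docker mapped to the
// container's 22/tcp, using the same Go template the test harness passes to
// `docker container inspect -f` later in this log. The container name and the
// presence of the docker CLI are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"default-k8s-diff-port-332023").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // e.g. 33859 per the Ports map above
}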
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
E1017 21:18:03.755952  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023: exit status 2 (447.554711ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25: (1.862728347s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ pause   │ -p old-k8s-version-521710 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:13 UTC │                     │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                                                                                                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ image   │ default-k8s-diff-port-332023 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-332023 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:17:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:17:25.896638  830770 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:17:25.896817  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.896848  830770 out.go:374] Setting ErrFile to fd 2...
	I1017 21:17:25.896871  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.897169  830770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:17:25.897638  830770 out.go:368] Setting JSON to false
	I1017 21:17:25.898672  830770 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14392,"bootTime":1760721454,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:17:25.898772  830770 start.go:141] virtualization:  
	I1017 21:17:25.903008  830770 out.go:179] * [newest-cni-229231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:17:25.906552  830770 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:17:25.906622  830770 notify.go:220] Checking for updates...
	I1017 21:17:25.913253  830770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:17:25.916417  830770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:17:25.923280  830770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:17:25.926423  830770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:17:25.929564  830770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:17:25.933973  830770 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:25.934131  830770 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:17:25.970809  830770 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:17:25.970940  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.033338  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.022814652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.033459  830770 docker.go:318] overlay module found
	I1017 21:17:26.036699  830770 out.go:179] * Using the docker driver based on user configuration
	I1017 21:17:26.039590  830770 start.go:305] selected driver: docker
	I1017 21:17:26.039616  830770 start.go:925] validating driver "docker" against <nil>
	I1017 21:17:26.039631  830770 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:17:26.040431  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.110477  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.100667774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.110672  830770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 21:17:26.110712  830770 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 21:17:26.110983  830770 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:17:26.113917  830770 out.go:179] * Using Docker driver with root privileges
	I1017 21:17:26.116863  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:26.116938  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:26.116952  830770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:17:26.117030  830770 start.go:349] cluster config:
	{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:26.120091  830770 out.go:179] * Starting "newest-cni-229231" primary control-plane node in "newest-cni-229231" cluster
	I1017 21:17:26.122942  830770 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:17:26.125973  830770 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:17:26.128763  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.128824  830770 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:17:26.128857  830770 cache.go:58] Caching tarball of preloaded images
	I1017 21:17:26.128860  830770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:17:26.128952  830770 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:17:26.128962  830770 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:17:26.129077  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:26.129096  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json: {Name:mk4a965455fc1745973969f97e2671685387c291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:26.148441  830770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:17:26.148467  830770 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:17:26.148486  830770 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:17:26.148514  830770 start.go:360] acquireMachinesLock for newest-cni-229231: {Name:mk13ee1c4f50a5b33a03132c2a1b074ef28a6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:17:26.148627  830770 start.go:364] duration metric: took 90.635µs to acquireMachinesLock for "newest-cni-229231"
	I1017 21:17:26.148659  830770 start.go:93] Provisioning new machine with config: &{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:17:26.148730  830770 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:17:22.446843  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:24.447029  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:26.152341  830770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:17:26.152619  830770 start.go:159] libmachine.API.Create for "newest-cni-229231" (driver="docker")
	I1017 21:17:26.152666  830770 client.go:168] LocalClient.Create starting
	I1017 21:17:26.152739  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:17:26.152774  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152788  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.152843  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:17:26.152867  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152881  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.153247  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:17:26.169488  830770 cli_runner.go:211] docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:17:26.169583  830770 network_create.go:284] running [docker network inspect newest-cni-229231] to gather additional debugging logs...
	I1017 21:17:26.169604  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231
	W1017 21:17:26.186172  830770 cli_runner.go:211] docker network inspect newest-cni-229231 returned with exit code 1
	I1017 21:17:26.186205  830770 network_create.go:287] error running [docker network inspect newest-cni-229231]: docker network inspect newest-cni-229231: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-229231 not found
	I1017 21:17:26.186220  830770 network_create.go:289] output of [docker network inspect newest-cni-229231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-229231 not found
	
	** /stderr **
	I1017 21:17:26.186329  830770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:26.204008  830770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:17:26.204497  830770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:17:26.204816  830770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:17:26.205333  830770 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d7480}
	I1017 21:17:26.205356  830770 network_create.go:124] attempt to create docker network newest-cni-229231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 21:17:26.205416  830770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-229231 newest-cni-229231
	I1017 21:17:26.265199  830770 network_create.go:108] docker network newest-cni-229231 192.168.76.0/24 created
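The network_create lines above record minikube's subnet selection: candidate 192.168.x.0/24 ranges are walked in steps of 9 (49, 58, 67, 76, ...), each one already backed by a bridge interface is skipped, and the first free one (here 192.168.76.0/24) is handed to `docker network create`. A rough, self-contained Go sketch of that scan follows; the list of taken subnets is hard-coded here for illustration, whereas minikube discovers it from the host's existing networks:

// Sketch of the subnet walk logged above: step through 192.168.x.0/24
// candidates by 9 and take the first one not already in use. The "taken"
// set is a stand-in for what minikube derives from existing bridge networks.
package main

import "fmt"

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
	}
	for third := 49; third <= 254; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			fmt.Println("using free private subnet", cidr) // prints 192.168.76.0/24 for this host
			return
		}
	}
	fmt.Println("no free subnet found")
}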
	I1017 21:17:26.265233  830770 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-229231" container
	I1017 21:17:26.265306  830770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:17:26.281495  830770 cli_runner.go:164] Run: docker volume create newest-cni-229231 --label name.minikube.sigs.k8s.io=newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:17:26.300695  830770 oci.go:103] Successfully created a docker volume newest-cni-229231
	I1017 21:17:26.300795  830770 cli_runner.go:164] Run: docker run --rm --name newest-cni-229231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --entrypoint /usr/bin/test -v newest-cni-229231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:17:26.857294  830770 oci.go:107] Successfully prepared a docker volume newest-cni-229231
	I1017 21:17:26.857355  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.857375  830770 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:17:26.857450  830770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:17:26.447334  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:28.447522  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:30.946881  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:31.268041  830770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.410549994s)
	I1017 21:17:31.268068  830770 kic.go:203] duration metric: took 4.410689736s to extract preloaded images to volume ...
	W1017 21:17:31.268210  830770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:17:31.268320  830770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:17:31.323431  830770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-229231 --name newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-229231 --network newest-cni-229231 --ip 192.168.76.2 --volume newest-cni-229231:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:17:31.638855  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Running}}
	I1017 21:17:31.661125  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.686266  830770 cli_runner.go:164] Run: docker exec newest-cni-229231 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:17:31.737826  830770 oci.go:144] the created container "newest-cni-229231" has a running status.
	I1017 21:17:31.737867  830770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa...
	I1017 21:17:31.921397  830770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:17:31.946133  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.973908  830770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:17:31.974143  830770 kic_runner.go:114] Args: [docker exec --privileged newest-cni-229231 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:17:32.031065  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:32.052790  830770 machine.go:93] provisionDockerMachine start ...
	I1017 21:17:32.052900  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:32.076724  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:32.077059  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:32.077077  830770 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:17:32.077758  830770 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:17:35.230904  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.230933  830770 ubuntu.go:182] provisioning hostname "newest-cni-229231"
	I1017 21:17:35.230996  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.249668  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.249988  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.250000  830770 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229231 && echo "newest-cni-229231" | sudo tee /etc/hostname
	I1017 21:17:35.413958  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.414035  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.435057  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.435455  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.435488  830770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229231/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:17:35.591708  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:17:35.591799  830770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:17:35.591862  830770 ubuntu.go:190] setting up certificates
	I1017 21:17:35.591895  830770 provision.go:84] configureAuth start
	I1017 21:17:35.591995  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:35.609144  830770 provision.go:143] copyHostCerts
	I1017 21:17:35.609297  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:17:35.609316  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:17:35.609405  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:17:35.609552  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:17:35.609557  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:17:35.609584  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:17:35.609634  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:17:35.609639  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:17:35.609661  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:17:35.609709  830770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229231 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-229231]
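The provision.go line above lists the SAN set minikube puts on the machine's TLS server certificate (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-229231), signed with the profile's CA key. Below is a minimal, self-contained Go sketch of that signing step, not minikube's actual code: the CA here is generated in memory purely for illustration, whereas minikube reuses ca.pem and ca-key.pem from its certs directory:

// Hypothetical sketch: issue a server certificate carrying the SANs from the
// log line above, signed by a throwaway in-memory CA. Key sizes, lifetimes and
// subject names are illustrative assumptions only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA key pair and self-signed CA certificate.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server certificate with the IP and DNS SANs from the log, signed by the CA above.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-229231"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-229231"},
	}
	srvDER := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}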
	W1017 21:17:33.446457  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:35.448550  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:35.925977  830770 provision.go:177] copyRemoteCerts
	I1017 21:17:35.926048  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:17:35.926101  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.946877  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.055784  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:17:36.078722  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:17:36.099207  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 21:17:36.117673  830770 provision.go:87] duration metric: took 525.737204ms to configureAuth
	I1017 21:17:36.117698  830770 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:17:36.117893  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:36.118005  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.135299  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:36.135606  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:36.135628  830770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:17:36.529551  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:17:36.529577  830770 machine.go:96] duration metric: took 4.476759507s to provisionDockerMachine
	I1017 21:17:36.529587  830770 client.go:171] duration metric: took 10.376909381s to LocalClient.Create
	I1017 21:17:36.529600  830770 start.go:167] duration metric: took 10.3769818s to libmachine.API.Create "newest-cni-229231"
	I1017 21:17:36.529608  830770 start.go:293] postStartSetup for "newest-cni-229231" (driver="docker")
	I1017 21:17:36.529622  830770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:17:36.529693  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:17:36.529734  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.548863  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.656261  830770 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:17:36.659923  830770 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:17:36.659952  830770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:17:36.659963  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:17:36.660021  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:17:36.660120  830770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:17:36.660228  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:17:36.668164  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:36.700480  830770 start.go:296] duration metric: took 170.852961ms for postStartSetup
	I1017 21:17:36.700912  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.718213  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:36.718503  830770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:17:36.718554  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.736166  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.840599  830770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:17:36.845564  830770 start.go:128] duration metric: took 10.696817914s to createHost
	I1017 21:17:36.845590  830770 start.go:83] releasing machines lock for "newest-cni-229231", held for 10.69694859s
	I1017 21:17:36.845663  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.870955  830770 ssh_runner.go:195] Run: cat /version.json
	I1017 21:17:36.871006  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.871064  830770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:17:36.871169  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.890159  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.893006  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:37.122070  830770 ssh_runner.go:195] Run: systemctl --version
	I1017 21:17:37.128670  830770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:17:37.168991  830770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:17:37.173298  830770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:17:37.173394  830770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:17:37.205394  830770 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:17:37.205430  830770 start.go:495] detecting cgroup driver to use...
	I1017 21:17:37.205465  830770 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:17:37.205526  830770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:17:37.224366  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:17:37.237784  830770 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:17:37.237852  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:17:37.256543  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:17:37.277284  830770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:17:37.414655  830770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:17:37.541005  830770 docker.go:234] disabling docker service ...
	I1017 21:17:37.541103  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:17:37.564030  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:17:37.577238  830770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:17:37.700062  830770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:17:37.826057  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:17:37.839922  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:17:37.860612  830770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:17:37.860715  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.870163  830770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:17:37.870267  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.879740  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.888727  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.897649  830770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:17:37.905938  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.914957  830770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.937609  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.948540  830770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:17:37.957167  830770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:17:37.964892  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.099253  830770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:17:38.240063  830770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:17:38.240136  830770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:17:38.244811  830770 start.go:563] Will wait 60s for crictl version
	I1017 21:17:38.244925  830770 ssh_runner.go:195] Run: which crictl
	I1017 21:17:38.249281  830770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:17:38.275725  830770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:17:38.275883  830770 ssh_runner.go:195] Run: crio --version
	I1017 21:17:38.308570  830770 ssh_runner.go:195] Run: crio --version
	I1017 21:17:38.340791  830770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:17:38.343764  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:38.370087  830770 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:17:38.375055  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
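The /etc/hosts rewrite above is a drop-then-append pattern: strip any line already ending in a tab plus host.minikube.internal, then add the fresh mapping. A small Go sketch of the same idempotent update, assuming the process can write /etc/hosts directly (the log does it via a temp file plus sudo cp):

```go
// Idempotent hosts-entry update: remove any stale line for the alias, append
// the current IP mapping, write the file back.
package main

import (
	"os"
	"strings"
)

func setHostAlias(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry; it is re-added below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = setHostAlias("/etc/hosts", "192.168.76.1", "host.minikube.internal")
}
```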
	I1017 21:17:38.389020  830770 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 21:17:38.391841  830770 kubeadm.go:883] updating cluster {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:17:38.391994  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:38.392081  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.432568  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.432593  830770 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:17:38.432650  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.460578  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.460602  830770 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:17:38.460611  830770 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:17:38.460723  830770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-229231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:17:38.460816  830770 ssh_runner.go:195] Run: crio config
	I1017 21:17:38.516431  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:38.516456  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:38.516477  830770 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 21:17:38.516510  830770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229231 NodeName:newest-cni-229231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:17:38.516658  830770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229231"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:17:38.516740  830770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:17:38.526469  830770 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:17:38.526538  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:17:38.534666  830770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:17:38.547630  830770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:17:38.561449  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
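The kubeadm.yaml rendered above is a multi-document file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One quick way to confirm the pod CIDR landed in both relevant documents is to decode them in a loop; a sketch assuming gopkg.in/yaml.v3 is available, reading the kubeadm.yaml.new just written (it is later copied to kubeadm.yaml):

```go
// Walk each YAML document in the generated kubeadm config and print the pod
// CIDR fields, to verify the kubeadm.pod-network-cidr extra-config took effect.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		switch doc["kind"] {
		case "ClusterConfiguration":
			if nw, ok := doc["networking"].(map[string]interface{}); ok {
				fmt.Println("podSubnet:", nw["podSubnet"])
			}
		case "KubeProxyConfiguration":
			fmt.Println("clusterCIDR:", doc["clusterCIDR"])
		}
	}
}
```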
	I1017 21:17:38.575259  830770 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:17:38.578744  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:17:38.588729  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.701322  830770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:17:38.717843  830770 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231 for IP: 192.168.76.2
	I1017 21:17:38.717915  830770 certs.go:195] generating shared ca certs ...
	I1017 21:17:38.717946  830770 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:38.718125  830770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:17:38.718210  830770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:17:38.718245  830770 certs.go:257] generating profile certs ...
	I1017 21:17:38.718333  830770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key
	I1017 21:17:38.718359  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt with IP's: []
	I1017 21:17:39.230829  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt ...
	I1017 21:17:39.230858  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt: {Name:mk374f432cfcb8f38f0f3620aea987f930973189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231059  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key ...
	I1017 21:17:39.231074  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key: {Name:mk9a5a91826f85ec18ceb8bb2c0d21490d528c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231190  830770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c
	I1017 21:17:39.231212  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 21:17:39.632870  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c ...
	I1017 21:17:39.632901  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c: {Name:mk1fc1882cd3e285fbb7cde7fecc4a73bff5842b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633094  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c ...
	I1017 21:17:39.633109  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c: {Name:mk63d7546a5a1042c9b899492162b207f9dfbd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633200  830770 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt
	I1017 21:17:39.633291  830770 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key
	I1017 21:17:39.633351  830770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key
	I1017 21:17:39.633372  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt with IP's: []
	I1017 21:17:39.776961  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt ...
	I1017 21:17:39.776988  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt: {Name:mk568020e3c894822912675278ba0a7cb00e1d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.777165  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key ...
	I1017 21:17:39.777178  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key: {Name:mk2a6796c8f93ee4a1075bf9a9a8896dad2c6071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.777358  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:17:39.777404  830770 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:17:39.777418  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:17:39.777442  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:17:39.777471  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:17:39.777529  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:17:39.777577  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:39.778142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:17:39.797135  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:17:39.815142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:17:39.833747  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:17:39.852420  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:17:39.874045  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:17:39.892689  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:17:39.910417  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:17:39.932877  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:17:39.952735  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:17:39.971022  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:17:39.989320  830770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
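The cert handling above skips regeneration whenever a valid pair already exists on disk. A minimal sketch of that kind of check, verifying that a cert and key match and that the certificate has not expired (paths are the ones the log just copied into place):

```go
// Sanity-check a certificate/key pair: LoadX509KeyPair fails if they do not
// match; the parsed certificate gives us the expiry to check against.
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"time"
)

func checkPair(certPath, keyPath string) error {
	pair, err := tls.LoadX509KeyPair(certPath, keyPath)
	if err != nil {
		return fmt.Errorf("cert/key mismatch or unreadable: %w", err)
	}
	cert, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		return err
	}
	if time.Now().After(cert.NotAfter) {
		return fmt.Errorf("certificate expired at %s", cert.NotAfter)
	}
	fmt.Printf("%s valid until %s\n", cert.Subject.CommonName, cert.NotAfter)
	return nil
}

func main() {
	if err := checkPair("/var/lib/minikube/certs/apiserver.crt",
		"/var/lib/minikube/certs/apiserver.key"); err != nil {
		fmt.Println(err)
	}
}
```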
	I1017 21:17:40.002762  830770 ssh_runner.go:195] Run: openssl version
	I1017 21:17:40.026952  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:17:40.039691  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046264  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046382  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.089166  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:17:40.098476  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:17:40.107567  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.111936  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.112004  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.156191  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:17:40.165471  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:17:40.174450  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179199  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179352  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.222124  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
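The openssl/ln pattern above creates the <subject-hash>.0 symlinks that OpenSSL uses to look up CA certificates in /etc/ssl/certs. Sketched in Go, shelling out to openssl for the hash (assumes openssl is on PATH and the process can write /etc/ssl/certs):

```go
// Recreate the /etc/ssl/certs/<hash>.0 symlink for each CA bundle, using
// "openssl x509 -hash -noout" to compute the subject hash, as the log does.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	for _, c := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/586172.pem",
	} {
		if err := linkByHash(c); err != nil {
			fmt.Fprintln(os.Stderr, c, err)
		}
	}
}
```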
	I1017 21:17:40.230675  830770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:17:40.234509  830770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:17:40.234592  830770 kubeadm.go:400] StartCluster: {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:40.234704  830770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:17:40.234766  830770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:17:40.262504  830770 cri.go:89] found id: ""
	I1017 21:17:40.262583  830770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:17:40.270697  830770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:17:40.278891  830770 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:17:40.278997  830770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:17:40.287270  830770 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:17:40.287340  830770 kubeadm.go:157] found existing configuration files:
	
	I1017 21:17:40.287407  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 21:17:40.295472  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:17:40.295584  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:17:40.302811  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 21:17:40.310442  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:17:40.310505  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:17:40.317875  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.326500  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:17:40.326593  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.334533  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 21:17:40.342514  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:17:40.342664  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:17:40.350582  830770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:17:40.399498  830770 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:17:40.399775  830770 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:17:40.425148  830770 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:17:40.425613  830770 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:17:40.425689  830770 kubeadm.go:318] OS: Linux
	I1017 21:17:40.425766  830770 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:17:40.425854  830770 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:17:40.425938  830770 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:17:40.426015  830770 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:17:40.426105  830770 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:17:40.426195  830770 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:17:40.426282  830770 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:17:40.426370  830770 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:17:40.426455  830770 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:17:40.497867  830770 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:17:40.498029  830770 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:17:40.498154  830770 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:17:40.511576  830770 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 21:17:40.516931  830770 out.go:252]   - Generating certificates and keys ...
	I1017 21:17:40.517063  830770 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:17:40.517194  830770 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1017 21:17:37.945787  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:39.946831  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:41.008815  830770 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:17:41.303554  830770 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:17:41.808832  830770 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:17:41.881243  830770 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:17:43.445089  830770 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:17:43.445443  830770 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.061713  830770 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:17:44.062066  830770 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.362183  830770 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:17:44.828686  830770 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:17:45.117371  830770 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:17:45.117460  830770 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1017 21:17:41.948873  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:44.448598  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:46.946545  827198 pod_ready.go:94] pod "coredns-66bc5c9577-nvmzl" is "Ready"
	I1017 21:17:46.946568  827198 pod_ready.go:86] duration metric: took 40.506182733s for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.950489  827198 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.958748  827198 pod_ready.go:94] pod "etcd-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.958770  827198 pod_ready.go:86] duration metric: took 8.257866ms for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.968348  827198 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.986155  827198 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.986253  827198 pod_ready.go:86] duration metric: took 17.869473ms for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.989889  827198 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.145095  827198 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:47.145179  827198 pod_ready.go:86] duration metric: took 155.195762ms for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.345096  827198 pod_ready.go:83] waiting for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.744574  827198 pod_ready.go:94] pod "kube-proxy-rh2gh" is "Ready"
	I1017 21:17:47.744651  827198 pod_ready.go:86] duration metric: took 399.477512ms for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.944362  827198 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345130  827198 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:48.345158  827198 pod_ready.go:86] duration metric: took 400.767186ms for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345171  827198 pod_ready.go:40] duration metric: took 41.909274783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:48.445850  827198 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:17:48.449135  827198 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-332023" cluster and "default" namespace by default
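The pod_ready waits logged above boil down to polling each pod's Ready condition. A rough client-go equivalent, using the kubeconfig path from this run (namespace, pod name and timeout are illustrative, not minikube's internal helper):

```go
// Poll a pod until its Ready condition is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21664-584308/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-66bc5c9577-nvmzl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
```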
	I1017 21:17:45.974439  830770 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:17:46.193541  830770 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:17:47.888679  830770 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:17:48.619292  830770 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:17:49.045086  830770 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:17:49.045679  830770 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:17:49.048322  830770 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:17:49.051470  830770 out.go:252]   - Booting up control plane ...
	I1017 21:17:49.051575  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:17:49.051661  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:17:49.052583  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:17:49.068158  830770 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:17:49.069121  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:17:49.077471  830770 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:17:49.078223  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:17:49.078478  830770 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:17:49.219544  830770 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:17:49.219687  830770 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:17:50.723305  830770 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501459653s
	I1017 21:17:50.723755  830770 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:17:50.724084  830770 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 21:17:50.724217  830770 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:17:50.724309  830770 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:17:52.994819  830770 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.270025456s
	I1017 21:17:54.905875  830770 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.182022258s
	I1017 21:17:56.725976  830770 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001885509s
	I1017 21:17:56.747203  830770 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:17:56.762458  830770 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:17:56.776965  830770 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:17:56.777180  830770 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-229231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:17:56.788814  830770 kubeadm.go:318] [bootstrap-token] Using token: dfhkce.8y881vui82au3otr
	I1017 21:17:56.793729  830770 out.go:252]   - Configuring RBAC rules ...
	I1017 21:17:56.793864  830770 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:17:56.795886  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:17:56.803738  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:17:56.807710  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:17:56.811280  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:17:56.817302  830770 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:17:57.133021  830770 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:17:57.620314  830770 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:17:58.133883  830770 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:17:58.135079  830770 kubeadm.go:318] 
	I1017 21:17:58.135223  830770 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:17:58.135249  830770 kubeadm.go:318] 
	I1017 21:17:58.135376  830770 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:17:58.135391  830770 kubeadm.go:318] 
	I1017 21:17:58.135426  830770 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:17:58.135513  830770 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:17:58.135578  830770 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:17:58.135609  830770 kubeadm.go:318] 
	I1017 21:17:58.135673  830770 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:17:58.135682  830770 kubeadm.go:318] 
	I1017 21:17:58.135736  830770 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:17:58.135745  830770 kubeadm.go:318] 
	I1017 21:17:58.135810  830770 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:17:58.135913  830770 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:17:58.136012  830770 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:17:58.136022  830770 kubeadm.go:318] 
	I1017 21:17:58.136140  830770 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:17:58.136263  830770 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:17:58.136273  830770 kubeadm.go:318] 
	I1017 21:17:58.136379  830770 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136500  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:17:58.136527  830770 kubeadm.go:318] 	--control-plane 
	I1017 21:17:58.136535  830770 kubeadm.go:318] 
	I1017 21:17:58.136643  830770 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:17:58.136657  830770 kubeadm.go:318] 
	I1017 21:17:58.136764  830770 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136882  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:17:58.140499  830770 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:17:58.140743  830770 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:17:58.140859  830770 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
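The kubelet-check and control-plane-check phases reported above poll plain HTTP(S) health endpoints until they answer 200. A bare-bones version of that probe (certificate verification is skipped here only because this sketch does not load the cluster CA; a real check should pin it):

```go
// Poll the kubelet healthz endpoint and the apiserver livez endpoint until
// each returns HTTP 200 or the per-endpoint deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitFor(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   3 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"http://127.0.0.1:10248/healthz", // kubelet
		"https://192.168.76.2:8443/livez", // kube-apiserver
	} {
		if err := waitFor(u, 4*time.Minute); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println(u, "is healthy")
	}
}
```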
	I1017 21:17:58.140880  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:58.140888  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:58.145921  830770 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:17:58.148955  830770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:17:58.153324  830770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:17:58.153351  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:17:58.167635  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:17:58.518826  830770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:17:58.518991  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:58.519168  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-229231 minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=newest-cni-229231 minikube.k8s.io/primary=true
	I1017 21:17:58.757619  830770 ops.go:34] apiserver oom_adj: -16
	I1017 21:17:58.757777  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.258478  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.758332  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.259233  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.758014  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.258315  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.758342  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.258499  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.408060  830770 kubeadm.go:1113] duration metric: took 3.889134071s to wait for elevateKubeSystemPrivileges
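The repeated "kubectl get sa default" runs above are a poll: the default ServiceAccount only appears once the controller-manager has created it, so minikube retries roughly every 500ms until the command succeeds. A bare-bones version of that loop using the binary and kubeconfig paths from the log:

```go
// Retry "kubectl get sa default" until it exits 0 or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	kubeconfig := "/var/lib/minikube/kubeconfig"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default service account")
}
```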
	I1017 21:18:02.408087  830770 kubeadm.go:402] duration metric: took 22.173501106s to StartCluster
	I1017 21:18:02.408103  830770 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.408166  830770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:02.409084  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.409291  830770 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:18:02.409441  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:18:02.409691  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:02.409721  830770 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:18:02.409781  830770 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229231"
	I1017 21:18:02.409794  830770 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-229231"
	I1017 21:18:02.409817  830770 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:02.410337  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.410858  830770 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229231"
	I1017 21:18:02.410878  830770 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229231"
	I1017 21:18:02.411151  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.415302  830770 out.go:179] * Verifying Kubernetes components...
	I1017 21:18:02.419994  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:02.446892  830770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.681435011Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=194b74da-a30d-4284-84f2-01ec50072cab name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.687193482Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7071275b-a5e9-4a1c-92d1-ea4b3d8a00c2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.68763663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.701621935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.701939905Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ee0052b12f105426fdb8f509f18467860521dc8648e7ad4b9f9f0cdd0c8abe68/merged/etc/passwd: no such file or directory"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.702037424Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee0052b12f105426fdb8f509f18467860521dc8648e7ad4b9f9f0cdd0c8abe68/merged/etc/group: no such file or directory"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.702376587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.735933868Z" level=info msg="Created container c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77: kube-system/storage-provisioner/storage-provisioner" id=7071275b-a5e9-4a1c-92d1-ea4b3d8a00c2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.737137222Z" level=info msg="Starting container: c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77" id=21ee07a8-e4e3-4280-90d2-269b1ec754f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.745405796Z" level=info msg="Started container" PID=1649 containerID=c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77 description=kube-system/storage-provisioner/storage-provisioner id=21ee07a8-e4e3-4280-90d2-269b1ec754f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=491f44e869e289cc544300246343fcdbfb8f5f243cb565e21e7c8b25bcb4a156
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.441142899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447392667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447433299Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447454723Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451725209Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451757603Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451777993Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.457093801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.45713431Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.457154774Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466346387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466523833Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466611424Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.47016442Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.470317407Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c5bb18bea6bbf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           29 seconds ago       Running             storage-provisioner         2                   491f44e869e28       storage-provisioner                                    kube-system
	807710bf71556       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           31 seconds ago       Exited              dashboard-metrics-scraper   2                   a1cdff09e3159       dashboard-metrics-scraper-6ffb444bf9-fb94s             kubernetes-dashboard
	c0cc1e2037d3c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   43 seconds ago       Running             kubernetes-dashboard        0                   839195d2bcfa8       kubernetes-dashboard-855c9754f9-vh6cd                  kubernetes-dashboard
	1a5a39b4bbd63       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           59 seconds ago       Running             coredns                     1                   d914af05362ba       coredns-66bc5c9577-nvmzl                               kube-system
	a1dce463c036c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           59 seconds ago       Running             kube-proxy                  1                   fd3bbb2a5db19       kube-proxy-rh2gh                                       kube-system
	3f873d142aed9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   8c2f97d66af4d       busybox                                                default
	6408ebc2296fb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   491f44e869e28       storage-provisioner                                    kube-system
	98f498aab5461       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   2c067780cbe01       kindnet-29xbg                                          kube-system
	3a60d48c2bf86       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   35499a9197167       kube-scheduler-default-k8s-diff-port-332023            kube-system
	dc48eb2f630d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6370ffc10ce5a       kube-apiserver-default-k8s-diff-port-332023            kube-system
	3da362d3cd0b8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4a04c552e6538       kube-controller-manager-default-k8s-diff-port-332023   kube-system
	da7022bc37b90       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d0381e93b6fbe       etcd-default-k8s-diff-port-332023                      kube-system
	
	
	==> coredns [1a5a39b4bbd639ee14240f6d0ab58f5317fd0a79c4d8b4ca7f73c246bd827c65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49043 - 50526 "HINFO IN 5498312436900605418.8624572342813003945. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022837516s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-332023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-332023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-332023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-332023
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-332023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e5730e8f-3fc7-4fd8-9c01-a78f58d462d6
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-nvmzl                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-332023                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-29xbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-332023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-332023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-rh2gh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-332023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fb94s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vh6cd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m21s                  kube-proxy       
	  Normal   Starting                 59s                    kube-proxy       
	  Warning  CgroupV1                 2m36s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m35s (x8 over 2m35s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m28s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m28s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m27s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m23s                  node-controller  Node default-k8s-diff-port-332023 event: Registered Node default-k8s-diff-port-332023 in Controller
	  Normal   NodeReady                101s                   kubelet          Node default-k8s-diff-port-332023 status is now: NodeReady
	  Normal   Starting                 67s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 67s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                    node-controller  Node default-k8s-diff-port-332023 event: Registered Node default-k8s-diff-port-332023 in Controller
	
	
	==> dmesg <==
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	[Oct17 21:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5] <==
	{"level":"warn","ts":"2025-10-17T21:17:01.680337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.703993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.728605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.747848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.770197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.789037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.799563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.827998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.901834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.920423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.942549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.974077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.004946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.028172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.059067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.097779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.152661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.199674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.266009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.321927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.397316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.418255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.462875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.527190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.817205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:05 up  4:00,  0 user,  load average: 3.95, 3.69, 3.26
	Linux default-k8s-diff-port-332023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98f498aab54612da243b71a7d5b7189c25ffc04ef6e6f4d23431cb88b69ee3f9] <==
	I1017 21:17:05.230279       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:17:05.235606       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:17:05.235860       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:17:05.235912       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:17:05.235950       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:17:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:17:05.443641       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:17:05.443669       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:17:05.443679       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:17:05.443811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:17:35.437717       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:17:35.437842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:17:35.437940       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:17:35.446280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1017 21:17:36.544648       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:17:36.544742       1 metrics.go:72] Registering metrics
	I1017 21:17:36.544876       1 controller.go:711] "Syncing nftables rules"
	I1017 21:17:45.440519       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:17:45.440587       1 main.go:301] handling current node
	I1017 21:17:55.436562       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:17:55.436608       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7] <==
	I1017 21:17:04.066842       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:17:04.168575       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:17:04.170771       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:17:04.177026       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:17:04.251766       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:17:04.251903       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:17:04.252177       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:17:04.252186       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:17:04.252315       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 21:17:04.252344       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:17:04.263781       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:17:04.264758       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:17:04.288164       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 21:17:04.350342       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:17:04.431731       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:17:04.685186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:17:05.216266       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:17:05.418116       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:17:05.559147       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:17:05.630316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:17:05.833134       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.210.115"}
	I1017 21:17:05.880739       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.48.8"}
	I1017 21:17:08.436185       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:17:08.584573       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:17:08.685505       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18] <==
	I1017 21:17:08.137277       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:17:08.139715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:17:08.139869       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:17:08.143445       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:17:08.144643       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:17:08.144661       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:17:08.144670       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:17:08.147722       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:17:08.156080       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:17:08.166392       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 21:17:08.167470       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:17:08.169543       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:17:08.172828       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 21:17:08.175749       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:17:08.179038       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 21:17:08.179054       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:17:08.179083       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:17:08.179097       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 21:17:08.179155       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:17:08.179167       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:17:08.179472       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:17:08.179550       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-332023"
	I1017 21:17:08.179592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 21:17:08.179868       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:17:08.188349       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [a1dce463c036c682d336727ebc5030e6c9acb8a703ec87097c08b12c202fc8bb] <==
	I1017 21:17:05.582707       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:17:05.934657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:17:06.037559       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:17:06.037606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:17:06.037695       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:17:06.062429       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:17:06.062545       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:17:06.067320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:17:06.067744       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:17:06.067948       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:17:06.069306       1 config.go:200] "Starting service config controller"
	I1017 21:17:06.069367       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:17:06.069413       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:17:06.069441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:17:06.069476       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:17:06.069502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:17:06.070331       1 config.go:309] "Starting node config controller"
	I1017 21:17:06.070410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:17:06.070440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:17:06.169638       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:17:06.169742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:17:06.169769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b] <==
	I1017 21:17:03.896050       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:17:05.936834       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:17:05.939190       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:17:05.948039       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:17:05.948189       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:17:05.948249       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:17:05.948306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:17:05.950950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:05.951050       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:05.951256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:17:05.951303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:17:06.048712       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:17:06.051236       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:06.051493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:17:08 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:08.932703     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5a545536-453a-4470-8fae-376f46bef39c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vh6cd\" (UID: \"5a545536-453a-4470-8fae-376f46bef39c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd"
	Oct 17 21:17:08 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:08.932724     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mft75\" (UniqueName: \"kubernetes.io/projected/5a545536-453a-4470-8fae-376f46bef39c-kube-api-access-mft75\") pod \"kubernetes-dashboard-855c9754f9-vh6cd\" (UID: \"5a545536-453a-4470-8fae-376f46bef39c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd"
	Oct 17 21:17:10 default-k8s-diff-port-332023 kubelet[780]: W1017 21:17:10.046590     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24 WatchSource:0}: Error finding container a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24: Status 404 returned error can't find the container with id a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24
	Oct 17 21:17:10 default-k8s-diff-port-332023 kubelet[780]: W1017 21:17:10.064479     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38 WatchSource:0}: Error finding container 839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38: Status 404 returned error can't find the container with id 839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38
	Oct 17 21:17:14 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:14.580559     780 scope.go:117] "RemoveContainer" containerID="199d29d062144a69f82ba65343d87888eec61a438c794d6591b76416c0aca338"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:15.585310     780 scope.go:117] "RemoveContainer" containerID="199d29d062144a69f82ba65343d87888eec61a438c794d6591b76416c0aca338"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:15.585607     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:15.585751     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:16 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:16.591240     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:16 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:16.591847     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:20 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:20.011382     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:20 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:20.011582     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.368630     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.670376     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.670670     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:33.670815     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.690429     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd" podStartSLOduration=14.260991936 podStartE2EDuration="25.688986207s" podCreationTimestamp="2025-10-17 21:17:08 +0000 UTC" firstStartedPulling="2025-10-17 21:17:10.067393679 +0000 UTC m=+11.942145203" lastFinishedPulling="2025-10-17 21:17:21.49538795 +0000 UTC m=+23.370139474" observedRunningTime="2025-10-17 21:17:21.664910682 +0000 UTC m=+23.539662214" watchObservedRunningTime="2025-10-17 21:17:33.688986207 +0000 UTC m=+35.563737731"
	Oct 17 21:17:35 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:35.678776     780 scope.go:117] "RemoveContainer" containerID="6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9"
	Oct 17 21:17:40 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:40.011353     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:40 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:40.011613     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:53 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:53.368047     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:53 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:53.368691     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c0cc1e2037d3cbd63794dd670636bf547be7c76d91e90dc18187d5fc6258f357] <==
	2025/10/17 21:17:21 Using namespace: kubernetes-dashboard
	2025/10/17 21:17:21 Using in-cluster config to connect to apiserver
	2025/10/17 21:17:21 Using secret token for csrf signing
	2025/10/17 21:17:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:17:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:17:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:17:21 Generating JWE encryption key
	2025/10/17 21:17:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:17:22 Initializing JWE encryption key from synchronized object
	2025/10/17 21:17:22 Creating in-cluster Sidecar client
	2025/10/17 21:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:22 Serving insecurely on HTTP port: 9090
	2025/10/17 21:17:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:21 Starting overwatch
	
	
	==> storage-provisioner [6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9] <==
	I1017 21:17:05.403537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:17:35.405642       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77] <==
	W1017 21:17:35.794718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:39.250004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:43.510920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:47.110393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:50.164242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.186376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.196337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:53.196518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:17:53.196710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87!
	I1017 21:17:53.197918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80975992-6b56-4221-9d62-c0a1d9481647", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87 became leader
	W1017 21:17:53.209189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.218025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:53.297394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87!
	W1017 21:17:55.220651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:55.224893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:57.228748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:57.234161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:59.237148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:59.241640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:01.245952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:01.255830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:03.258610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:03.269465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:05.272179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:05.280189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023: exit status 2 (449.880658ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-332023
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-332023:

-- stdout --
	[
	    {
	        "Id": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	        "Created": "2025-10-17T21:15:10.315339717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 827327,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:16:51.502994319Z",
	            "FinishedAt": "2025-10-17T21:16:50.519715874Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hostname",
	        "HostsPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/hosts",
	        "LogPath": "/var/lib/docker/containers/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98-json.log",
	        "Name": "/default-k8s-diff-port-332023",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-332023:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-332023",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98",
	                "LowerDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04f48ae74c6e27bcef0c493afa8ef4e0f808f20563387d11d6246795dfc4b557/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-332023",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-332023/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-332023",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-332023",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "decc947f5fe25636b1c4d22828a045cd833da65105c1af49ef1a4cd8aa343ec6",
	            "SandboxKey": "/var/run/docker/netns/decc947f5fe2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33859"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33860"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33863"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33861"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33862"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-332023": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:ec:5f:c8:c8:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "26a553f884b09380ce04b950347080a804cedc891493065a8f217a57e449901d",
	                    "EndpointID": "03deacc45da31451e520bf4e1621196d392192dce1fa4eade20b4afa7f7d06a4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-332023",
	                        "cbf8d10c5cde"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
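The inspect output above publishes each exposed container port on a loopback host port (for example 8444/tcp, the non-default apiserver port here, on 127.0.0.1:33862). As a rough sketch only, using the container name from that output, a single mapping can be read back with docker port or with the same Go-template query style minikube runs later in this log:

    # hedged example, not part of the captured test output
    docker port default-k8s-diff-port-332023 8444/tcp
    # expected, per the JSON above: 127.0.0.1:33862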
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023: exit status 2 (436.281229ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-332023 logs -n 25: (1.683044096s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                                                                                                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-332023 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-332023 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:17:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:17:25.896638  830770 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:17:25.896817  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.896848  830770 out.go:374] Setting ErrFile to fd 2...
	I1017 21:17:25.896871  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.897169  830770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:17:25.897638  830770 out.go:368] Setting JSON to false
	I1017 21:17:25.898672  830770 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14392,"bootTime":1760721454,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:17:25.898772  830770 start.go:141] virtualization:  
	I1017 21:17:25.903008  830770 out.go:179] * [newest-cni-229231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:17:25.906552  830770 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:17:25.906622  830770 notify.go:220] Checking for updates...
	I1017 21:17:25.913253  830770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:17:25.916417  830770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:17:25.923280  830770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:17:25.926423  830770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:17:25.929564  830770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:17:25.933973  830770 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:25.934131  830770 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:17:25.970809  830770 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:17:25.970940  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.033338  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.022814652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.033459  830770 docker.go:318] overlay module found
	I1017 21:17:26.036699  830770 out.go:179] * Using the docker driver based on user configuration
	I1017 21:17:26.039590  830770 start.go:305] selected driver: docker
	I1017 21:17:26.039616  830770 start.go:925] validating driver "docker" against <nil>
	I1017 21:17:26.039631  830770 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:17:26.040431  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.110477  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.100667774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.110672  830770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 21:17:26.110712  830770 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 21:17:26.110983  830770 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:17:26.113917  830770 out.go:179] * Using Docker driver with root privileges
	I1017 21:17:26.116863  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:26.116938  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:26.116952  830770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:17:26.117030  830770 start.go:349] cluster config:
	{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:26.120091  830770 out.go:179] * Starting "newest-cni-229231" primary control-plane node in "newest-cni-229231" cluster
	I1017 21:17:26.122942  830770 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:17:26.125973  830770 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:17:26.128763  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.128824  830770 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:17:26.128857  830770 cache.go:58] Caching tarball of preloaded images
	I1017 21:17:26.128860  830770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:17:26.128952  830770 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:17:26.128962  830770 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:17:26.129077  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:26.129096  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json: {Name:mk4a965455fc1745973969f97e2671685387c291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:26.148441  830770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:17:26.148467  830770 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:17:26.148486  830770 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:17:26.148514  830770 start.go:360] acquireMachinesLock for newest-cni-229231: {Name:mk13ee1c4f50a5b33a03132c2a1b074ef28a6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:17:26.148627  830770 start.go:364] duration metric: took 90.635µs to acquireMachinesLock for "newest-cni-229231"
	I1017 21:17:26.148659  830770 start.go:93] Provisioning new machine with config: &{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:17:26.148730  830770 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:17:22.446843  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:24.447029  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:26.152341  830770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:17:26.152619  830770 start.go:159] libmachine.API.Create for "newest-cni-229231" (driver="docker")
	I1017 21:17:26.152666  830770 client.go:168] LocalClient.Create starting
	I1017 21:17:26.152739  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:17:26.152774  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152788  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.152843  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:17:26.152867  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152881  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.153247  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:17:26.169488  830770 cli_runner.go:211] docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:17:26.169583  830770 network_create.go:284] running [docker network inspect newest-cni-229231] to gather additional debugging logs...
	I1017 21:17:26.169604  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231
	W1017 21:17:26.186172  830770 cli_runner.go:211] docker network inspect newest-cni-229231 returned with exit code 1
	I1017 21:17:26.186205  830770 network_create.go:287] error running [docker network inspect newest-cni-229231]: docker network inspect newest-cni-229231: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-229231 not found
	I1017 21:17:26.186220  830770 network_create.go:289] output of [docker network inspect newest-cni-229231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-229231 not found
	
	** /stderr **
	I1017 21:17:26.186329  830770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:26.204008  830770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:17:26.204497  830770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:17:26.204816  830770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:17:26.205333  830770 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d7480}
	I1017 21:17:26.205356  830770 network_create.go:124] attempt to create docker network newest-cni-229231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 21:17:26.205416  830770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-229231 newest-cni-229231
	I1017 21:17:26.265199  830770 network_create.go:108] docker network newest-cni-229231 192.168.76.0/24 created
	I1017 21:17:26.265233  830770 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-229231" container
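The network_create lines above pick the first free private /24 (192.168.76.0/24) and create a bridge network with an explicit gateway and MTU. A minimal sketch, assuming the same docker host, of reading the chosen subnet and gateway back out of the created network:

    # hedged example, not part of the captured test output
    docker network inspect newest-cni-229231 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # expected, per the log above: 192.168.76.0/24 192.168.76.1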
	I1017 21:17:26.265306  830770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:17:26.281495  830770 cli_runner.go:164] Run: docker volume create newest-cni-229231 --label name.minikube.sigs.k8s.io=newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:17:26.300695  830770 oci.go:103] Successfully created a docker volume newest-cni-229231
	I1017 21:17:26.300795  830770 cli_runner.go:164] Run: docker run --rm --name newest-cni-229231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --entrypoint /usr/bin/test -v newest-cni-229231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:17:26.857294  830770 oci.go:107] Successfully prepared a docker volume newest-cni-229231
	I1017 21:17:26.857355  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.857375  830770 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:17:26.857450  830770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:17:26.447334  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:28.447522  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:30.946881  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:31.268041  830770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.410549994s)
	I1017 21:17:31.268068  830770 kic.go:203] duration metric: took 4.410689736s to extract preloaded images to volume ...
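The two docker run --rm commands above show the kic preload pattern: a named volume is prepared by a throwaway container, then the preloaded-images tarball is untarred straight into it. A generalized sketch of the same pattern; the volume name, tarball path and image are placeholders, and the image is assumed to ship GNU tar and lz4:

    # illustrative only, with placeholder names
    docker volume create demo-preload
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-preload:/extractDir \
      --entrypoint /usr/bin/tar \
      some-image-with-tar-and-lz4 -I lz4 -xf /preloaded.tar -C /extractDir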
	W1017 21:17:31.268210  830770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:17:31.268320  830770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:17:31.323431  830770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-229231 --name newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-229231 --network newest-cni-229231 --ip 192.168.76.2 --volume newest-cni-229231:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:17:31.638855  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Running}}
	I1017 21:17:31.661125  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.686266  830770 cli_runner.go:164] Run: docker exec newest-cni-229231 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:17:31.737826  830770 oci.go:144] the created container "newest-cni-229231" has a running status.
	I1017 21:17:31.737867  830770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa...
	I1017 21:17:31.921397  830770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:17:31.946133  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.973908  830770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:17:31.974143  830770 kic_runner.go:114] Args: [docker exec --privileged newest-cni-229231 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:17:32.031065  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:32.052790  830770 machine.go:93] provisionDockerMachine start ...
	I1017 21:17:32.052900  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:32.076724  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:32.077059  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:32.077077  830770 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:17:32.077758  830770 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:17:35.230904  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.230933  830770 ubuntu.go:182] provisioning hostname "newest-cni-229231"
	I1017 21:17:35.230996  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.249668  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.249988  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.250000  830770 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229231 && echo "newest-cni-229231" | sudo tee /etc/hostname
	I1017 21:17:35.413958  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.414035  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.435057  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.435455  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.435488  830770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229231/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:17:35.591708  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
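provisionDockerMachine above drives every step over SSH to the published 22/tcp host port (33864 in this run) as the docker user, using the key generated a few lines earlier. A hedged manual equivalent built only from values that appear in this log; the StrictHostKeyChecking option is an added convenience, not something minikube is shown using:

    # hedged example, not part of the captured test output
    SSH_PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-229231)
    ssh -o StrictHostKeyChecking=no -p "$SSH_PORT" \
      -i /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa \
      docker@127.0.0.1 hostname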
	I1017 21:17:35.591799  830770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:17:35.591862  830770 ubuntu.go:190] setting up certificates
	I1017 21:17:35.591895  830770 provision.go:84] configureAuth start
	I1017 21:17:35.591995  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:35.609144  830770 provision.go:143] copyHostCerts
	I1017 21:17:35.609297  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:17:35.609316  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:17:35.609405  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:17:35.609552  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:17:35.609557  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:17:35.609584  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:17:35.609634  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:17:35.609639  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:17:35.609661  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:17:35.609709  830770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229231 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-229231]
	W1017 21:17:33.446457  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:35.448550  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:35.925977  830770 provision.go:177] copyRemoteCerts
	I1017 21:17:35.926048  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:17:35.926101  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.946877  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.055784  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:17:36.078722  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:17:36.099207  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 21:17:36.117673  830770 provision.go:87] duration metric: took 525.737204ms to configureAuth
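configureAuth above regenerates the host certificates and issues a server certificate whose SAN list is logged (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-229231) before copying it to /etc/docker on the node. A small sketch, using the server.pem path from the log, for confirming those SANs from the host:

    # hedged example, not part of the captured test output
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'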
	I1017 21:17:36.117698  830770 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:17:36.117893  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:36.118005  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.135299  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:36.135606  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:36.135628  830770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:17:36.529551  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:17:36.529577  830770 machine.go:96] duration metric: took 4.476759507s to provisionDockerMachine
	I1017 21:17:36.529587  830770 client.go:171] duration metric: took 10.376909381s to LocalClient.Create
	I1017 21:17:36.529600  830770 start.go:167] duration metric: took 10.3769818s to libmachine.API.Create "newest-cni-229231"
	I1017 21:17:36.529608  830770 start.go:293] postStartSetup for "newest-cni-229231" (driver="docker")
	I1017 21:17:36.529622  830770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:17:36.529693  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:17:36.529734  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.548863  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.656261  830770 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:17:36.659923  830770 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:17:36.659952  830770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:17:36.659963  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:17:36.660021  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:17:36.660120  830770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:17:36.660228  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:17:36.668164  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:36.700480  830770 start.go:296] duration metric: took 170.852961ms for postStartSetup
	I1017 21:17:36.700912  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.718213  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:36.718503  830770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:17:36.718554  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.736166  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.840599  830770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:17:36.845564  830770 start.go:128] duration metric: took 10.696817914s to createHost
	I1017 21:17:36.845590  830770 start.go:83] releasing machines lock for "newest-cni-229231", held for 10.69694859s
	I1017 21:17:36.845663  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.870955  830770 ssh_runner.go:195] Run: cat /version.json
	I1017 21:17:36.871006  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.871064  830770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:17:36.871169  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.890159  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.893006  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:37.122070  830770 ssh_runner.go:195] Run: systemctl --version
	I1017 21:17:37.128670  830770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:17:37.168991  830770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:17:37.173298  830770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:17:37.173394  830770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:17:37.205394  830770 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:17:37.205430  830770 start.go:495] detecting cgroup driver to use...
	I1017 21:17:37.205465  830770 detect.go:187] detected "cgroupfs" cgroup driver on host os
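The detection above matches what the docker daemon itself reports (the docker info dump earlier in this log shows CgroupDriver:cgroupfs); cri-o inside the node is then configured to the same driver a few lines below. A one-line sketch of the equivalent check:

    # hedged example, not part of the captured test output
    docker info --format '{{.CgroupDriver}}'
    # per the docker info captured earlier in this log: cgroupfs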
	I1017 21:17:37.205526  830770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:17:37.224366  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:17:37.237784  830770 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:17:37.237852  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:17:37.256543  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:17:37.277284  830770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:17:37.414655  830770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:17:37.541005  830770 docker.go:234] disabling docker service ...
	I1017 21:17:37.541103  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:17:37.564030  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:17:37.577238  830770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:17:37.700062  830770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:17:37.826057  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:17:37.839922  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:17:37.860612  830770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:17:37.860715  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.870163  830770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:17:37.870267  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.879740  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.888727  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.897649  830770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:17:37.905938  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.914957  830770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.937609  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.948540  830770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:17:37.957167  830770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:17:37.964892  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.099253  830770 ssh_runner.go:195] Run: sudo systemctl restart crio
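The sed invocations above pin the pause image and cgroup manager in the CRI-O drop-in before systemd reloads and restarts crio. A minimal local sketch of the same key-rewrite idea, in Go with only the standard library (minikube actually runs the sed commands over SSH via ssh_runner; the path below is taken from the log and writing it needs root):

    package main

    import (
        "os"
        "regexp"
    )

    // setConfValue rewrites any existing `key = ...` line in a CRI-O drop-in
    // to `key = "value"`, mirroring the sed invocations in the log above.
    func setConfValue(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
        return os.WriteFile(path, out, 0o644)
    }

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        if err := setConfValue(conf, "pause_image", "registry.k8s.io/pause:3.10.1"); err != nil {
            panic(err)
        }
        if err := setConfValue(conf, "cgroup_manager", "cgroupfs"); err != nil {
            panic(err)
        }
    }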
	I1017 21:17:38.240063  830770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:17:38.240136  830770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:17:38.244811  830770 start.go:563] Will wait 60s for crictl version
	I1017 21:17:38.244925  830770 ssh_runner.go:195] Run: which crictl
	I1017 21:17:38.249281  830770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:17:38.275725  830770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
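After restarting crio, the tool waits up to 60s for /var/run/crio/crio.sock and then queries crictl for the runtime version. A small sketch of the socket wait, assuming only the Go standard library:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for the CRI socket the way the "Will wait 60s for
    // socket path" step above does: stat until it exists or the deadline passes.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("crio.sock is present")
    }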
	I1017 21:17:38.275883  830770 ssh_runner.go:195] Run: crio --version
	I1017 21:17:38.308570  830770 ssh_runner.go:195] Run: crio --version
	I1017 21:17:38.340791  830770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:17:38.343764  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:38.370087  830770 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:17:38.375055  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
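The bash one-liner above makes the host.minikube.internal entry idempotent: strip any stale line, append the current mapping, copy the result back. A rough Go equivalent of that pattern (the IP and hostname come from the log; editing /etc/hosts requires root):

    package main

    import (
        "os"
        "strings"
    )

    // ensureHostRecord mirrors the { grep -v ...; echo ...; } > /tmp/h.$$ trick
    // above: drop any old line for the hostname, then append the fresh mapping.
    func ensureHostRecord(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale record for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := ensureHostRecord("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }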
	I1017 21:17:38.389020  830770 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 21:17:38.391841  830770 kubeadm.go:883] updating cluster {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:17:38.391994  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:38.392081  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.432568  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.432593  830770 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:17:38.432650  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.460578  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.460602  830770 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:17:38.460611  830770 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:17:38.460723  830770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-229231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:17:38.460816  830770 ssh_runner.go:195] Run: crio config
	I1017 21:17:38.516431  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:38.516456  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:38.516477  830770 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 21:17:38.516510  830770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229231 NodeName:newest-cni-229231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:17:38.516658  830770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229231"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:17:38.516740  830770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:17:38.526469  830770 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:17:38.526538  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:17:38.534666  830770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:17:38.547630  830770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:17:38.561449  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
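The rendered kubeadm config written above to /var/tmp/minikube/kubeadm.yaml.new pins podSubnet to 10.42.0.0/16 (from the kubeadm.pod-network-cidr extra option) and serviceSubnet to 10.96.0.0/12; the same pod CIDR also appears as clusterCIDR in the KubeProxyConfiguration. A tiny, illustrative check that the two ranges do not overlap, using only net.ParseCIDR:

    package main

    import (
        "fmt"
        "net"
    )

    // overlaps reports whether either network contains the other's base address.
    func overlaps(a, b *net.IPNet) bool {
        return a.Contains(b.IP) || b.Contains(a.IP)
    }

    func main() {
        _, pods, _ := net.ParseCIDR("10.42.0.0/16")
        _, services, _ := net.ParseCIDR("10.96.0.0/12")
        fmt.Println("pod/service CIDRs overlap:", overlaps(pods, services)) // false
    }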
	I1017 21:17:38.575259  830770 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:17:38.578744  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:17:38.588729  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.701322  830770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:17:38.717843  830770 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231 for IP: 192.168.76.2
	I1017 21:17:38.717915  830770 certs.go:195] generating shared ca certs ...
	I1017 21:17:38.717946  830770 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:38.718125  830770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:17:38.718210  830770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:17:38.718245  830770 certs.go:257] generating profile certs ...
	I1017 21:17:38.718333  830770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key
	I1017 21:17:38.718359  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt with IP's: []
	I1017 21:17:39.230829  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt ...
	I1017 21:17:39.230858  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt: {Name:mk374f432cfcb8f38f0f3620aea987f930973189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231059  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key ...
	I1017 21:17:39.231074  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key: {Name:mk9a5a91826f85ec18ceb8bb2c0d21490d528c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231190  830770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c
	I1017 21:17:39.231212  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 21:17:39.632870  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c ...
	I1017 21:17:39.632901  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c: {Name:mk1fc1882cd3e285fbb7cde7fecc4a73bff5842b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633094  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c ...
	I1017 21:17:39.633109  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c: {Name:mk63d7546a5a1042c9b899492162b207f9dfbd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633200  830770 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt
	I1017 21:17:39.633291  830770 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key
	I1017 21:17:39.633351  830770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key
	I1017 21:17:39.633372  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt with IP's: []
	I1017 21:17:39.776961  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt ...
	I1017 21:17:39.776988  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt: {Name:mk568020e3c894822912675278ba0a7cb00e1d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.777165  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key ...
	I1017 21:17:39.777178  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key: {Name:mk2a6796c8f93ee4a1075bf9a9a8896dad2c6071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
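The crypto.go steps above issue profile certificates (client, apiserver, aggregator proxy-client) signed by the existing minikube CAs. A self-contained sketch of the general technique with crypto/x509; it creates a throwaway CA in memory instead of loading ca.crt/ca.key from .minikube, and the subject names are only illustrative (errors are elided for brevity):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "time"
    )

    func main() {
        // Throwaway CA, standing in for the persisted minikubeCA key pair.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Client certificate signed by that CA, analogous to the "minikube-user" cert.
        clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        clientTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
        }
        clientDER, _ := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
        fmt.Printf("issued client cert, %d DER bytes\n", len(clientDER))
    }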
	I1017 21:17:39.777358  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:17:39.777404  830770 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:17:39.777418  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:17:39.777442  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:17:39.777471  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:17:39.777529  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:17:39.777577  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:39.778142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:17:39.797135  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:17:39.815142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:17:39.833747  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:17:39.852420  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:17:39.874045  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:17:39.892689  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:17:39.910417  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:17:39.932877  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:17:39.952735  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:17:39.971022  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:17:39.989320  830770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:17:40.002762  830770 ssh_runner.go:195] Run: openssl version
	I1017 21:17:40.026952  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:17:40.039691  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046264  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046382  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.089166  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:17:40.098476  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:17:40.107567  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.111936  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.112004  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.156191  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:17:40.165471  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:17:40.174450  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179199  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179352  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.222124  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
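Each CA bundle copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked as <hash>.0 under /etc/ssl/certs, which is what the three test/ln pairs above do. A hedged sketch that shells out to openssl for the hash (paths from the log; creating links under /etc/ssl/certs needs root):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash mirrors the hash-then-symlink dance in the log: compute the
    // subject hash of a PEM cert and point /etc/ssl/certs/<hash>.0 at it.
    func linkByHash(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }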
	I1017 21:17:40.230675  830770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:17:40.234509  830770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:17:40.234592  830770 kubeadm.go:400] StartCluster: {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:40.234704  830770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:17:40.234766  830770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:17:40.262504  830770 cri.go:89] found id: ""
	I1017 21:17:40.262583  830770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:17:40.270697  830770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:17:40.278891  830770 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:17:40.278997  830770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:17:40.287270  830770 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:17:40.287340  830770 kubeadm.go:157] found existing configuration files:
	
	I1017 21:17:40.287407  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 21:17:40.295472  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:17:40.295584  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:17:40.302811  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 21:17:40.310442  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:17:40.310505  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:17:40.317875  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.326500  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:17:40.326593  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.334533  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 21:17:40.342514  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:17:40.342664  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:17:40.350582  830770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:17:40.399498  830770 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:17:40.399775  830770 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:17:40.425148  830770 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:17:40.425613  830770 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:17:40.425689  830770 kubeadm.go:318] OS: Linux
	I1017 21:17:40.425766  830770 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:17:40.425854  830770 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:17:40.425938  830770 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:17:40.426015  830770 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:17:40.426105  830770 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:17:40.426195  830770 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:17:40.426282  830770 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:17:40.426370  830770 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:17:40.426455  830770 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:17:40.497867  830770 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:17:40.498029  830770 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:17:40.498154  830770 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:17:40.511576  830770 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 21:17:40.516931  830770 out.go:252]   - Generating certificates and keys ...
	I1017 21:17:40.517063  830770 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:17:40.517194  830770 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1017 21:17:37.945787  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:39.946831  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:41.008815  830770 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:17:41.303554  830770 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:17:41.808832  830770 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:17:41.881243  830770 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:17:43.445089  830770 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:17:43.445443  830770 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.061713  830770 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:17:44.062066  830770 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.362183  830770 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:17:44.828686  830770 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:17:45.117371  830770 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:17:45.117460  830770 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1017 21:17:41.948873  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:44.448598  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:46.946545  827198 pod_ready.go:94] pod "coredns-66bc5c9577-nvmzl" is "Ready"
	I1017 21:17:46.946568  827198 pod_ready.go:86] duration metric: took 40.506182733s for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.950489  827198 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.958748  827198 pod_ready.go:94] pod "etcd-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.958770  827198 pod_ready.go:86] duration metric: took 8.257866ms for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.968348  827198 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.986155  827198 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.986253  827198 pod_ready.go:86] duration metric: took 17.869473ms for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.989889  827198 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.145095  827198 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:47.145179  827198 pod_ready.go:86] duration metric: took 155.195762ms for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.345096  827198 pod_ready.go:83] waiting for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.744574  827198 pod_ready.go:94] pod "kube-proxy-rh2gh" is "Ready"
	I1017 21:17:47.744651  827198 pod_ready.go:86] duration metric: took 399.477512ms for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.944362  827198 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345130  827198 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:48.345158  827198 pod_ready.go:86] duration metric: took 400.767186ms for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345171  827198 pod_ready.go:40] duration metric: took 41.909274783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:48.445850  827198 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:17:48.449135  827198 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-332023" cluster and "default" namespace by default
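The pod_ready.go lines above poll each kube-system pod (coredns, etcd, apiserver, controller-manager, kube-proxy, scheduler) until its Ready condition is True. A simpler sketch of the same check that shells out to kubectl with a jsonpath filter instead of talking to the API directly (the pod name is the one from the log and is only illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // podReady asks kubectl whether the pod's Ready condition is True.
    func podReady(ns, name string) (bool, error) {
        out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            return false, err
        }
        return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
        for i := 0; i < 60; i++ {
            if ok, err := podReady("kube-system", "coredns-66bc5c9577-nvmzl"); err == nil && ok {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }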
	I1017 21:17:45.974439  830770 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:17:46.193541  830770 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:17:47.888679  830770 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:17:48.619292  830770 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:17:49.045086  830770 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:17:49.045679  830770 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:17:49.048322  830770 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:17:49.051470  830770 out.go:252]   - Booting up control plane ...
	I1017 21:17:49.051575  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:17:49.051661  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:17:49.052583  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:17:49.068158  830770 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:17:49.069121  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:17:49.077471  830770 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:17:49.078223  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:17:49.078478  830770 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:17:49.219544  830770 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:17:49.219687  830770 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:17:50.723305  830770 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501459653s
	I1017 21:17:50.723755  830770 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:17:50.724084  830770 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 21:17:50.724217  830770 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:17:50.724309  830770 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:17:52.994819  830770 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.270025456s
	I1017 21:17:54.905875  830770 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.182022258s
	I1017 21:17:56.725976  830770 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001885509s
	I1017 21:17:56.747203  830770 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:17:56.762458  830770 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:17:56.776965  830770 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:17:56.777180  830770 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-229231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:17:56.788814  830770 kubeadm.go:318] [bootstrap-token] Using token: dfhkce.8y881vui82au3otr
	I1017 21:17:56.793729  830770 out.go:252]   - Configuring RBAC rules ...
	I1017 21:17:56.793864  830770 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:17:56.795886  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:17:56.803738  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:17:56.807710  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:17:56.811280  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:17:56.817302  830770 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:17:57.133021  830770 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:17:57.620314  830770 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:17:58.133883  830770 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:17:58.135079  830770 kubeadm.go:318] 
	I1017 21:17:58.135223  830770 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:17:58.135249  830770 kubeadm.go:318] 
	I1017 21:17:58.135376  830770 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:17:58.135391  830770 kubeadm.go:318] 
	I1017 21:17:58.135426  830770 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:17:58.135513  830770 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:17:58.135578  830770 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:17:58.135609  830770 kubeadm.go:318] 
	I1017 21:17:58.135673  830770 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:17:58.135682  830770 kubeadm.go:318] 
	I1017 21:17:58.135736  830770 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:17:58.135745  830770 kubeadm.go:318] 
	I1017 21:17:58.135810  830770 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:17:58.135913  830770 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:17:58.136012  830770 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:17:58.136022  830770 kubeadm.go:318] 
	I1017 21:17:58.136140  830770 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:17:58.136263  830770 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:17:58.136273  830770 kubeadm.go:318] 
	I1017 21:17:58.136379  830770 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136500  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:17:58.136527  830770 kubeadm.go:318] 	--control-plane 
	I1017 21:17:58.136535  830770 kubeadm.go:318] 
	I1017 21:17:58.136643  830770 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:17:58.136657  830770 kubeadm.go:318] 
	I1017 21:17:58.136764  830770 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136882  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:17:58.140499  830770 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:17:58.140743  830770 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:17:58.140859  830770 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
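The join commands printed above carry a --discovery-token-ca-cert-hash, which (as far as kubeadm documents it) is a SHA-256 over the CA certificate's Subject Public Key Info. A small sketch that recomputes that value from the cluster CA on the node (path from the log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash hex-encodes a SHA-256 of the CA cert's SubjectPublicKeyInfo,
    // the form kubeadm prints after --discovery-token-ca-cert-hash.
    func caCertHash(pemPath string) (string, error) {
        data, err := os.ReadFile(pemPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM data in %s", pemPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        fmt.Println(h)
    }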
	I1017 21:17:58.140880  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:58.140888  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:58.145921  830770 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:17:58.148955  830770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:17:58.153324  830770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:17:58.153351  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:17:58.167635  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:17:58.518826  830770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:17:58.518991  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:58.519168  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-229231 minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=newest-cni-229231 minikube.k8s.io/primary=true
	I1017 21:17:58.757619  830770 ops.go:34] apiserver oom_adj: -16
	I1017 21:17:58.757777  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.258478  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.758332  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.259233  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.758014  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.258315  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.758342  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.258499  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.408060  830770 kubeadm.go:1113] duration metric: took 3.889134071s to wait for elevateKubeSystemPrivileges
	I1017 21:18:02.408087  830770 kubeadm.go:402] duration metric: took 22.173501106s to StartCluster
	I1017 21:18:02.408103  830770 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.408166  830770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:02.409084  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.409291  830770 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:18:02.409441  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:18:02.409691  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:02.409721  830770 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:18:02.409781  830770 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229231"
	I1017 21:18:02.409794  830770 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-229231"
	I1017 21:18:02.409817  830770 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:02.410337  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.410858  830770 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229231"
	I1017 21:18:02.410878  830770 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229231"
	I1017 21:18:02.411151  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.415302  830770 out.go:179] * Verifying Kubernetes components...
	I1017 21:18:02.419994  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:02.446892  830770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:18:02.449947  830770 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:02.449968  830770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:18:02.450036  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:02.460628  830770 addons.go:238] Setting addon default-storageclass=true in "newest-cni-229231"
	I1017 21:18:02.460665  830770 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:02.461161  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.496741  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:02.503299  830770 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:02.503318  830770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:18:02.503375  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:02.531519  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:02.794332  830770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:02.889822  830770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:02.891523  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:18:02.891630  830770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:04.148640  830770 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.256989443s)
	I1017 21:18:04.150180  830770 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:18:04.150231  830770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:18:04.150463  830770 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.258918501s)
	I1017 21:18:04.150481  830770 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 21:18:04.151214  830770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261365415s)
	I1017 21:18:04.154639  830770 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 21:18:04.159456  830770 addons.go:514] duration metric: took 1.749719902s for enable addons: enabled=[default-storageclass storage-provisioner]
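The long sed pipeline a few lines up rewrites the coredns ConfigMap so that host.minikube.internal resolves to the gateway from inside the cluster, by inserting a hosts{} stanza ahead of the forward plugin. A string-level sketch of that edit (the sample Corefile is made up for illustration; the real one is fetched and replaced with kubectl):

    package main

    import (
        "fmt"
        "strings"
    )

    // injectHostsBlock inserts a CoreDNS hosts{} stanza right before the
    // "forward . /etc/resolv.conf" line, as the sed expression above does.
    func injectHostsBlock(corefile, ip string) string {
        hosts := "        hosts {\n" +
            "           " + ip + " host.minikube.internal\n" +
            "           fallthrough\n" +
            "        }\n"
        var out strings.Builder
        for _, line := range strings.SplitAfter(corefile, "\n") {
            if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
                out.WriteString(hosts)
            }
            out.WriteString(line)
        }
        return out.String()
    }

    func main() {
        corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n}\n"
        fmt.Print(injectHostsBlock(corefile, "192.168.76.1"))
    }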
	I1017 21:18:04.193245  830770 api_server.go:72] duration metric: took 1.783925587s to wait for apiserver process to appear ...
	I1017 21:18:04.193274  830770 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:18:04.193296  830770 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:04.209580  830770 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:18:04.217350  830770 api_server.go:141] control plane version: v1.34.1
	I1017 21:18:04.217383  830770 api_server.go:131] duration metric: took 24.10087ms to wait for apiserver health ...
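The api_server.go lines above poll https://192.168.76.2:8443/healthz until it answers 200 "ok". A minimal polling sketch; TLS verification is skipped here only to keep the example short, whereas the real check trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver /healthz endpoint until it returns 200.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body)
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver never became healthy at %s", url)
    }

    func main() {
        if err := waitHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
            panic(err)
        }
    }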
	I1017 21:18:04.217393  830770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:18:04.237242  830770 system_pods.go:59] 8 kube-system pods found
	I1017 21:18:04.237274  830770 system_pods.go:61] "coredns-66bc5c9577-zsbw9" [ab5b72a4-6a5d-4f98-9f27-a6b79f1c56cf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:04.237283  830770 system_pods.go:61] "etcd-newest-cni-229231" [1972c4be-a973-41cd-a7db-f940c7bfedcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:18:04.237291  830770 system_pods.go:61] "kindnet-lwztk" [1ce01431-d96e-4be0-aee9-f5172d35f7a0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 21:18:04.237296  830770 system_pods.go:61] "kube-apiserver-newest-cni-229231" [bc06de01-5287-4d5d-9c16-8917e6f62b6c] Running
	I1017 21:18:04.237303  830770 system_pods.go:61] "kube-controller-manager-newest-cni-229231" [62b40139-100e-4c66-827d-de841c45bc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:18:04.237309  830770 system_pods.go:61] "kube-proxy-ws4mh" [66800a1d-51bc-41d0-9811-463a149fc9cd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 21:18:04.237320  830770 system_pods.go:61] "kube-scheduler-newest-cni-229231" [2b082865-cbcb-428b-b44b-77e744c7e89b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:18:04.237327  830770 system_pods.go:61] "storage-provisioner" [8a3d6e07-be6d-445a-b6af-7ef77edb6905] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:04.237332  830770 system_pods.go:74] duration metric: took 19.93441ms to wait for pod list to return data ...
	I1017 21:18:04.237341  830770 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:18:04.254560  830770 default_sa.go:45] found service account: "default"
	I1017 21:18:04.254593  830770 default_sa.go:55] duration metric: took 17.245983ms for default service account to be created ...
	I1017 21:18:04.254606  830770 kubeadm.go:586] duration metric: took 1.84529353s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:04.254624  830770 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:18:04.271224  830770 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:18:04.271260  830770 node_conditions.go:123] node cpu capacity is 2
	I1017 21:18:04.271272  830770 node_conditions.go:105] duration metric: took 16.642694ms to run NodePressure ...
	I1017 21:18:04.271285  830770 start.go:241] waiting for startup goroutines ...
	I1017 21:18:04.654399  830770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-229231" context rescaled to 1 replicas
	I1017 21:18:04.654433  830770 start.go:246] waiting for cluster config update ...
	I1017 21:18:04.654447  830770 start.go:255] writing updated cluster config ...
	I1017 21:18:04.654749  830770 ssh_runner.go:195] Run: rm -f paused
	I1017 21:18:04.738789  830770 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:18:04.741993  830770 out.go:179] * Done! kubectl is now configured to use "newest-cni-229231" cluster and "default" namespace by default
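Editorial note: the start log above shows two checks worth calling out — minikube rewrites the CoreDNS ConfigMap to add a host.minikube.internal record pointing at the gateway IP, and it then polls the apiserver's /healthz endpoint until it returns 200. A rough way to re-run both checks by hand, assuming the kubeconfig and node IP (192.168.76.2) captured in this run, is:

    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    curl -k https://192.168.76.2:8443/healthz

(-k skips certificate verification for this quick manual check only.)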
	
	
	==> CRI-O <==
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.681435011Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=194b74da-a30d-4284-84f2-01ec50072cab name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.687193482Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7071275b-a5e9-4a1c-92d1-ea4b3d8a00c2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.68763663Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.701621935Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.701939905Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ee0052b12f105426fdb8f509f18467860521dc8648e7ad4b9f9f0cdd0c8abe68/merged/etc/passwd: no such file or directory"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.702037424Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ee0052b12f105426fdb8f509f18467860521dc8648e7ad4b9f9f0cdd0c8abe68/merged/etc/group: no such file or directory"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.702376587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.735933868Z" level=info msg="Created container c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77: kube-system/storage-provisioner/storage-provisioner" id=7071275b-a5e9-4a1c-92d1-ea4b3d8a00c2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.737137222Z" level=info msg="Starting container: c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77" id=21ee07a8-e4e3-4280-90d2-269b1ec754f4 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:17:35 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:35.745405796Z" level=info msg="Started container" PID=1649 containerID=c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77 description=kube-system/storage-provisioner/storage-provisioner id=21ee07a8-e4e3-4280-90d2-269b1ec754f4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=491f44e869e289cc544300246343fcdbfb8f5f243cb565e21e7c8b25bcb4a156
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.441142899Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447392667Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447433299Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.447454723Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451725209Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451757603Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.451777993Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.457093801Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.45713431Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.457154774Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466346387Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466523833Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.466611424Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.47016442Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 17 21:17:45 default-k8s-diff-port-332023 crio[653]: time="2025-10-17T21:17:45.470317407Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	c5bb18bea6bbf       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           32 seconds ago       Running             storage-provisioner         2                   491f44e869e28       storage-provisioner                                    kube-system
	807710bf71556       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   a1cdff09e3159       dashboard-metrics-scraper-6ffb444bf9-fb94s             kubernetes-dashboard
	c0cc1e2037d3c       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   46 seconds ago       Running             kubernetes-dashboard        0                   839195d2bcfa8       kubernetes-dashboard-855c9754f9-vh6cd                  kubernetes-dashboard
	1a5a39b4bbd63       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           About a minute ago   Running             coredns                     1                   d914af05362ba       coredns-66bc5c9577-nvmzl                               kube-system
	a1dce463c036c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           About a minute ago   Running             kube-proxy                  1                   fd3bbb2a5db19       kube-proxy-rh2gh                                       kube-system
	3f873d142aed9       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           About a minute ago   Running             busybox                     1                   8c2f97d66af4d       busybox                                                default
	6408ebc2296fb       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           About a minute ago   Exited              storage-provisioner         1                   491f44e869e28       storage-provisioner                                    kube-system
	98f498aab5461       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           About a minute ago   Running             kindnet-cni                 1                   2c067780cbe01       kindnet-29xbg                                          kube-system
	3a60d48c2bf86       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   35499a9197167       kube-scheduler-default-k8s-diff-port-332023            kube-system
	dc48eb2f630d9       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   6370ffc10ce5a       kube-apiserver-default-k8s-diff-port-332023            kube-system
	3da362d3cd0b8       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   4a04c552e6538       kube-controller-manager-default-k8s-diff-port-332023   kube-system
	da7022bc37b90       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   d0381e93b6fbe       etcd-default-k8s-diff-port-332023                      kube-system
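Editorial note: the container listing above is taken from the CRI. An equivalent (hedged) view can be pulled directly from the node by running crictl over minikube's ssh wrapper:

    minikube -p default-k8s-diff-port-332023 ssh -- sudo crictl ps -a

This lists all containers, including the Exited ones shown above (the first storage-provisioner attempt and the crash-looping dashboard-metrics-scraper).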
	
	
	==> coredns [1a5a39b4bbd639ee14240f6d0ab58f5317fd0a79c4d8b4ca7f73c246bd827c65] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49043 - 50526 "HINFO IN 5498312436900605418.8624572342813003945. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022837516s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
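Editorial note: the repeated "dial tcp 10.96.0.1:443: i/o timeout" errors mean CoreDNS could not reach the kubernetes Service VIP for roughly the first 30 seconds after the restart, which lines up with the kindnet log further down only reporting synced caches at 21:17:36. A hedged set of checks for this class of failure (assuming the default coredns Deployment name in kube-system):

    kubectl get svc kubernetes -o wide
    kubectl get endpointslices -l kubernetes.io/service-name=kubernetes
    kubectl -n kube-system logs deploy/coredns --tail=20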
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-332023
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-332023
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=default-k8s-diff-port-332023
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_15_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:15:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-332023
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:17:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:15:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 17 Oct 2025 21:17:55 +0000   Fri, 17 Oct 2025 21:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-332023
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                e5730e8f-3fc7-4fd8-9c01-a78f58d462d6
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 coredns-66bc5c9577-nvmzl                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m25s
	  kube-system                 etcd-default-k8s-diff-port-332023                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m30s
	  kube-system                 kindnet-29xbg                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-332023             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-332023    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m30s
	  kube-system                 kube-proxy-rh2gh                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-332023             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m32s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fb94s              0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-vh6cd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m23s                  kube-proxy       
	  Normal   Starting                 62s                    kube-proxy       
	  Warning  CgroupV1                 2m39s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m38s (x8 over 2m38s)  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s                  kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m26s                  node-controller  Node default-k8s-diff-port-332023 event: Registered Node default-k8s-diff-port-332023 in Controller
	  Normal   NodeReady                104s                   kubelet          Node default-k8s-diff-port-332023 status is now: NodeReady
	  Normal   Starting                 70s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 70s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x8 over 70s)      kubelet          Node default-k8s-diff-port-332023 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                    node-controller  Node default-k8s-diff-port-332023 event: Registered Node default-k8s-diff-port-332023 in Controller
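Editorial note: the node snapshot above is plain kubectl output and can be regenerated at any time against this context, for example:

    kubectl --context default-k8s-diff-port-332023 describe node default-k8s-diff-port-332023
    kubectl --context default-k8s-diff-port-332023 get node -o wide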
	
	
	==> dmesg <==
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	[Oct17 21:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [da7022bc37b9079001cebffcce30795078a568620a213091c6644391444e39b5] <==
	{"level":"warn","ts":"2025-10-17T21:17:01.680337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36384","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.703993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.728605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.747848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.770197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.789037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.799563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.827998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.901834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.920423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.942549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:01.974077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.004946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.028172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.059067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.097779Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.152661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.199674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.266009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.321927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.397316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.418255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.462875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.527190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:02.817205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36886","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:08 up  4:00,  0 user,  load average: 3.71, 3.65, 3.25
	Linux default-k8s-diff-port-332023 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [98f498aab54612da243b71a7d5b7189c25ffc04ef6e6f4d23431cb88b69ee3f9] <==
	I1017 21:17:05.230279       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:17:05.235606       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1017 21:17:05.235860       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:17:05.235912       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:17:05.235950       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:17:05Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:17:05.443641       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:17:05.443669       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:17:05.443679       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:17:05.443811       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1017 21:17:35.437717       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1017 21:17:35.437842       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1017 21:17:35.437940       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1017 21:17:35.446280       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1017 21:17:36.544648       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1017 21:17:36.544742       1 metrics.go:72] Registering metrics
	I1017 21:17:36.544876       1 controller.go:711] "Syncing nftables rules"
	I1017 21:17:45.440519       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:17:45.440587       1 main.go:301] handling current node
	I1017 21:17:55.436562       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:17:55.436608       1 main.go:301] handling current node
	I1017 21:18:05.443256       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1017 21:18:05.443288       1 main.go:301] handling current node
	
	
	==> kube-apiserver [dc48eb2f630d98cda44fbb45750a56291c2c0e6ce4f26f5acd167f6fce7fccc7] <==
	I1017 21:17:04.066842       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:17:04.168575       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:17:04.170771       1 cache.go:39] Caches are synced for autoregister controller
	I1017 21:17:04.177026       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:17:04.251766       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:17:04.251903       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:17:04.252177       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:17:04.252186       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:17:04.252315       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1017 21:17:04.252344       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:17:04.263781       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1017 21:17:04.264758       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:17:04.288164       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1017 21:17:04.350342       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:17:04.431731       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:17:04.685186       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:17:05.216266       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:17:05.418116       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:17:05.559147       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:17:05.630316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:17:05.833134       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.210.115"}
	I1017 21:17:05.880739       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.48.8"}
	I1017 21:17:08.436185       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:17:08.584573       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:17:08.685505       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3da362d3cd0b88ee9cd6f7f59b338a298ffed9f85cc275a93faaad5af7fbba18] <==
	I1017 21:17:08.137277       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:17:08.139715       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:17:08.139869       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:17:08.143445       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:17:08.144643       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:17:08.144661       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:17:08.144670       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:17:08.147722       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1017 21:17:08.156080       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1017 21:17:08.166392       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 21:17:08.167470       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:17:08.169543       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:17:08.172828       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 21:17:08.175749       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:17:08.179038       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1017 21:17:08.179054       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:17:08.179083       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:17:08.179097       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1017 21:17:08.179155       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:17:08.179167       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1017 21:17:08.179472       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1017 21:17:08.179550       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-332023"
	I1017 21:17:08.179592       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 21:17:08.179868       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1017 21:17:08.188349       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	
	
	==> kube-proxy [a1dce463c036c682d336727ebc5030e6c9acb8a703ec87097c08b12c202fc8bb] <==
	I1017 21:17:05.582707       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:17:05.934657       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:17:06.037559       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:17:06.037606       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1017 21:17:06.037695       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:17:06.062429       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:17:06.062545       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:17:06.067320       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:17:06.067744       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:17:06.067948       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:17:06.069306       1 config.go:200] "Starting service config controller"
	I1017 21:17:06.069367       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:17:06.069413       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:17:06.069441       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:17:06.069476       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:17:06.069502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:17:06.070331       1 config.go:309] "Starting node config controller"
	I1017 21:17:06.070410       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:17:06.070440       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:17:06.169638       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:17:06.169742       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:17:06.169769       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
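Editorial note: kube-proxy warns that nodePortAddresses is unset, so NodePort connections are accepted on every local IP; the message suggests --nodeport-addresses primary. Minikube provisions with kubeadm (see the kubeadm wait in the start log above), so the setting normally lives in the kube-proxy ConfigMap; a hedged way to inspect it is:

    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i nodeportaddresses

If the field is absent or empty, the permissive default described in the warning applies.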
	
	
	==> kube-scheduler [3a60d48c2bf8602c286b042650ee20d9cdb7340131699320df3dd5591fada63b] <==
	I1017 21:17:03.896050       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:17:05.936834       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:17:05.939190       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:17:05.948039       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:17:05.948189       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:17:05.948249       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:17:05.948306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:17:05.950950       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:05.951050       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:05.951256       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:17:05.951303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:17:06.048712       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:17:06.051236       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:17:06.051493       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:17:08 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:08.932703     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5a545536-453a-4470-8fae-376f46bef39c-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-vh6cd\" (UID: \"5a545536-453a-4470-8fae-376f46bef39c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd"
	Oct 17 21:17:08 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:08.932724     780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mft75\" (UniqueName: \"kubernetes.io/projected/5a545536-453a-4470-8fae-376f46bef39c-kube-api-access-mft75\") pod \"kubernetes-dashboard-855c9754f9-vh6cd\" (UID: \"5a545536-453a-4470-8fae-376f46bef39c\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd"
	Oct 17 21:17:10 default-k8s-diff-port-332023 kubelet[780]: W1017 21:17:10.046590     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24 WatchSource:0}: Error finding container a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24: Status 404 returned error can't find the container with id a1cdff09e31599dd47acc3aab7aa04cc309e3b53e870324c674aeb49e843ee24
	Oct 17 21:17:10 default-k8s-diff-port-332023 kubelet[780]: W1017 21:17:10.064479     780 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cbf8d10c5cde581743426f6a0862b8b99ff15fb0193c0360e06d1c5ded211a98/crio-839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38 WatchSource:0}: Error finding container 839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38: Status 404 returned error can't find the container with id 839195d2bcfa807d7d83dc922b4b5991f940e30a4d526374c7c28a5071343e38
	Oct 17 21:17:14 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:14.580559     780 scope.go:117] "RemoveContainer" containerID="199d29d062144a69f82ba65343d87888eec61a438c794d6591b76416c0aca338"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:15.585310     780 scope.go:117] "RemoveContainer" containerID="199d29d062144a69f82ba65343d87888eec61a438c794d6591b76416c0aca338"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:15.585607     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:15 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:15.585751     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:16 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:16.591240     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:16 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:16.591847     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:20 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:20.011382     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:20 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:20.011582     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.368630     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.670376     780 scope.go:117] "RemoveContainer" containerID="673ad76edc3a3ccc0623be7ea13391136b5f618566d84a7f1020d9699971ef4d"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.670670     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:33.670815     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:33 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:33.690429     780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-vh6cd" podStartSLOduration=14.260991936 podStartE2EDuration="25.688986207s" podCreationTimestamp="2025-10-17 21:17:08 +0000 UTC" firstStartedPulling="2025-10-17 21:17:10.067393679 +0000 UTC m=+11.942145203" lastFinishedPulling="2025-10-17 21:17:21.49538795 +0000 UTC m=+23.370139474" observedRunningTime="2025-10-17 21:17:21.664910682 +0000 UTC m=+23.539662214" watchObservedRunningTime="2025-10-17 21:17:33.688986207 +0000 UTC m=+35.563737731"
	Oct 17 21:17:35 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:35.678776     780 scope.go:117] "RemoveContainer" containerID="6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9"
	Oct 17 21:17:40 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:40.011353     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:40 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:40.011613     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:17:53 default-k8s-diff-port-332023 kubelet[780]: I1017 21:17:53.368047     780 scope.go:117] "RemoveContainer" containerID="807710bf71556dd9decfced2b6074070d0f4d13689f3ec310a140859fdcd1142"
	Oct 17 21:17:53 default-k8s-diff-port-332023 kubelet[780]: E1017 21:17:53.368691     780 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fb94s_kubernetes-dashboard(adebb336-3658-4eb9-8e45-3cf9a251062e)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fb94s" podUID="adebb336-3658-4eb9-8e45-3cf9a251062e"
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:18:01 default-k8s-diff-port-332023 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
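Editorial note: the kubelet entries show dashboard-metrics-scraper stuck in CrashLoopBackOff, with the back-off doubling from 10s to 20s between restart attempts, before the kubelet itself is stopped at 21:18:01 by the pause under test. A hedged way to find out why the container keeps exiting (pod name and namespace taken from this log) is to read the previous container's logs and the pod events:

    kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-fb94s --previous
    kubectl -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-fb94s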
	
	
	==> kubernetes-dashboard [c0cc1e2037d3cbd63794dd670636bf547be7c76d91e90dc18187d5fc6258f357] <==
	2025/10/17 21:17:21 Using namespace: kubernetes-dashboard
	2025/10/17 21:17:21 Using in-cluster config to connect to apiserver
	2025/10/17 21:17:21 Using secret token for csrf signing
	2025/10/17 21:17:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/17 21:17:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/17 21:17:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/17 21:17:21 Generating JWE encryption key
	2025/10/17 21:17:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/17 21:17:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/17 21:17:22 Initializing JWE encryption key from synchronized object
	2025/10/17 21:17:22 Creating in-cluster Sidecar client
	2025/10/17 21:17:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:22 Serving insecurely on HTTP port: 9090
	2025/10/17 21:17:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/17 21:17:21 Starting overwatch
	
	
	==> storage-provisioner [6408ebc2296fbbe70905b5b77e33d99e5de646373be4c1782bcdb4a6393035c9] <==
	I1017 21:17:05.403537       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1017 21:17:35.405642       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c5bb18bea6bbf8e578873467d866437089e2ce5bac9a8cf7a8ce30f64aa66b77] <==
	W1017 21:17:43.510920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:47.110393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:50.164242       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.186376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.196337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:53.196518       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1017 21:17:53.196710       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87!
	I1017 21:17:53.197918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"80975992-6b56-4221-9d62-c0a1d9481647", APIVersion:"v1", ResourceVersion:"682", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87 became leader
	W1017 21:17:53.209189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:53.218025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1017 21:17:53.297394       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-332023_e8153a66-06b6-4628-b3a9-0240e74f3e87!
	W1017 21:17:55.220651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:55.224893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:57.228748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:57.234161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:59.237148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:17:59.241640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:01.245952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:01.255830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:03.258610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:03.269465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:05.272179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:05.280189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:07.287906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1017 21:18:07.297374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
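The second storage-provisioner instance in the log above only acquires its lease after a string of deprecation warnings because the hostpath provisioner still takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath). As a rough way to inspect that lock by hand, assuming the cluster is still reachable under the same kubectl context the harness uses:

	# dump the Endpoints object that backs the provisioner's leader election;
	# the current leader is typically recorded in the
	# control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context default-k8s-diff-port-332023 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
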
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023: exit status 2 (515.248097ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (8.03s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (339.339377ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
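The exit status 11 above is minikube refusing to touch the addon because its "check paused" pre-flight could not list paused containers: the check shells out to runc, and /run/runc does not exist on this CRI-O node. A minimal sketch of reproducing that check by hand from inside the node follows; the crictl command is an illustrative fallback for inspection, not something minikube itself runs:

	# open a shell on the node, then rerun the exact command the pre-flight uses
	minikube -p newest-cni-229231 ssh
	sudo runc list -f json          # reproduces: open /run/runc: no such file or directory
	# runtime-agnostic view of what CRI-O is actually running on the node
	sudo crictl ps --state Running
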
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-229231
helpers_test.go:243: (dbg) docker inspect newest-cni-229231:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	        "Created": "2025-10-17T21:17:31.336902083Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 831161,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:17:31.399788817Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hosts",
	        "LogPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4-json.log",
	        "Name": "/newest-cni-229231",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-229231:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-229231",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	                "LowerDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-229231",
	                "Source": "/var/lib/docker/volumes/newest-cni-229231/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-229231",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-229231",
	                "name.minikube.sigs.k8s.io": "newest-cni-229231",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "76d64ffc80a83edb7769e4a1c111beaff7be4233f5f5ad56154493cc517df407",
	            "SandboxKey": "/var/run/docker/netns/76d64ffc80a8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33864"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33865"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33868"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33866"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33867"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-229231": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ce:d9:86:c8:bf:74",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d7c3e6b3b2a4a2268a255e36474804d31f93559fc2897f501a551059144a9568",
	                    "EndpointID": "ba3d8465c5bd59e3f44ab6a64d54960d8f039ee3f0d558ee9760a394c2d42c33",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-229231",
	                        "b47c768b4eb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25: (1.461304838s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ delete  │ -p old-k8s-version-521710                                                                                                                                                                                                                     │ old-k8s-version-521710       │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:15 UTC │
	│ image   │ no-preload-820018 image list --format=json                                                                                                                                                                                                    │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │ 17 Oct 25 21:14 UTC │
	│ pause   │ -p no-preload-820018 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:14 UTC │                     │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p no-preload-820018                                                                                                                                                                                                                          │ no-preload-820018            │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                                                                                                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-332023 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-332023 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:17:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:17:25.896638  830770 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:17:25.896817  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.896848  830770 out.go:374] Setting ErrFile to fd 2...
	I1017 21:17:25.896871  830770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:17:25.897169  830770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:17:25.897638  830770 out.go:368] Setting JSON to false
	I1017 21:17:25.898672  830770 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14392,"bootTime":1760721454,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:17:25.898772  830770 start.go:141] virtualization:  
	I1017 21:17:25.903008  830770 out.go:179] * [newest-cni-229231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:17:25.906552  830770 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:17:25.906622  830770 notify.go:220] Checking for updates...
	I1017 21:17:25.913253  830770 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:17:25.916417  830770 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:17:25.923280  830770 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:17:25.926423  830770 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:17:25.929564  830770 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:17:25.933973  830770 config.go:182] Loaded profile config "default-k8s-diff-port-332023": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:25.934131  830770 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:17:25.970809  830770 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:17:25.970940  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.033338  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.022814652 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.033459  830770 docker.go:318] overlay module found
	I1017 21:17:26.036699  830770 out.go:179] * Using the docker driver based on user configuration
	I1017 21:17:26.039590  830770 start.go:305] selected driver: docker
	I1017 21:17:26.039616  830770 start.go:925] validating driver "docker" against <nil>
	I1017 21:17:26.039631  830770 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:17:26.040431  830770 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:17:26.110477  830770 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:17:26.100667774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:17:26.110672  830770 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1017 21:17:26.110712  830770 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1017 21:17:26.110983  830770 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:17:26.113917  830770 out.go:179] * Using Docker driver with root privileges
	I1017 21:17:26.116863  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:26.116938  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:26.116952  830770 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 21:17:26.117030  830770 start.go:349] cluster config:
	{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:26.120091  830770 out.go:179] * Starting "newest-cni-229231" primary control-plane node in "newest-cni-229231" cluster
	I1017 21:17:26.122942  830770 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:17:26.125973  830770 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:17:26.128763  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.128824  830770 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:17:26.128857  830770 cache.go:58] Caching tarball of preloaded images
	I1017 21:17:26.128860  830770 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:17:26.128952  830770 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:17:26.128962  830770 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:17:26.129077  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:26.129096  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json: {Name:mk4a965455fc1745973969f97e2671685387c291 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:26.148441  830770 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:17:26.148467  830770 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:17:26.148486  830770 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:17:26.148514  830770 start.go:360] acquireMachinesLock for newest-cni-229231: {Name:mk13ee1c4f50a5b33a03132c2a1b074ef28a6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:17:26.148627  830770 start.go:364] duration metric: took 90.635µs to acquireMachinesLock for "newest-cni-229231"
	I1017 21:17:26.148659  830770 start.go:93] Provisioning new machine with config: &{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:17:26.148730  830770 start.go:125] createHost starting for "" (driver="docker")
	W1017 21:17:22.446843  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:24.447029  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:26.152341  830770 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1017 21:17:26.152619  830770 start.go:159] libmachine.API.Create for "newest-cni-229231" (driver="docker")
	I1017 21:17:26.152666  830770 client.go:168] LocalClient.Create starting
	I1017 21:17:26.152739  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem
	I1017 21:17:26.152774  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152788  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.152843  830770 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem
	I1017 21:17:26.152867  830770 main.go:141] libmachine: Decoding PEM data...
	I1017 21:17:26.152881  830770 main.go:141] libmachine: Parsing certificate...
	I1017 21:17:26.153247  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1017 21:17:26.169488  830770 cli_runner.go:211] docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1017 21:17:26.169583  830770 network_create.go:284] running [docker network inspect newest-cni-229231] to gather additional debugging logs...
	I1017 21:17:26.169604  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231
	W1017 21:17:26.186172  830770 cli_runner.go:211] docker network inspect newest-cni-229231 returned with exit code 1
	I1017 21:17:26.186205  830770 network_create.go:287] error running [docker network inspect newest-cni-229231]: docker network inspect newest-cni-229231: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-229231 not found
	I1017 21:17:26.186220  830770 network_create.go:289] output of [docker network inspect newest-cni-229231]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-229231 not found
	
	** /stderr **
	I1017 21:17:26.186329  830770 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:26.204008  830770 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
	I1017 21:17:26.204497  830770 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e1d4ee53d906 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:2e:44:a1:ec:4b:79} reservation:<nil>}
	I1017 21:17:26.204816  830770 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-5f5184407966 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:06:d4:09:61:05:cf} reservation:<nil>}
	I1017 21:17:26.205333  830770 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019d7480}
	I1017 21:17:26.205356  830770 network_create.go:124] attempt to create docker network newest-cni-229231 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1017 21:17:26.205416  830770 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-229231 newest-cni-229231
	I1017 21:17:26.265199  830770 network_create.go:108] docker network newest-cni-229231 192.168.76.0/24 created
	I1017 21:17:26.265233  830770 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-229231" container
	I1017 21:17:26.265306  830770 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1017 21:17:26.281495  830770 cli_runner.go:164] Run: docker volume create newest-cni-229231 --label name.minikube.sigs.k8s.io=newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true
	I1017 21:17:26.300695  830770 oci.go:103] Successfully created a docker volume newest-cni-229231
	I1017 21:17:26.300795  830770 cli_runner.go:164] Run: docker run --rm --name newest-cni-229231-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --entrypoint /usr/bin/test -v newest-cni-229231:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -d /var/lib
	I1017 21:17:26.857294  830770 oci.go:107] Successfully prepared a docker volume newest-cni-229231
	I1017 21:17:26.857355  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:26.857375  830770 kic.go:194] Starting extracting preloaded images to volume ...
	I1017 21:17:26.857450  830770 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir
	W1017 21:17:26.447334  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:28.447522  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:30.946881  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:31.268041  830770 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-229231:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 -I lz4 -xf /preloaded.tar -C /extractDir: (4.410549994s)
	I1017 21:17:31.268068  830770 kic.go:203] duration metric: took 4.410689736s to extract preloaded images to volume ...
	W1017 21:17:31.268210  830770 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1017 21:17:31.268320  830770 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1017 21:17:31.323431  830770 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-229231 --name newest-cni-229231 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-229231 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-229231 --network newest-cni-229231 --ip 192.168.76.2 --volume newest-cni-229231:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6
	I1017 21:17:31.638855  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Running}}
	I1017 21:17:31.661125  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.686266  830770 cli_runner.go:164] Run: docker exec newest-cni-229231 stat /var/lib/dpkg/alternatives/iptables
	I1017 21:17:31.737826  830770 oci.go:144] the created container "newest-cni-229231" has a running status.
	I1017 21:17:31.737867  830770 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa...
	I1017 21:17:31.921397  830770 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1017 21:17:31.946133  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:31.973908  830770 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1017 21:17:31.974143  830770 kic_runner.go:114] Args: [docker exec --privileged newest-cni-229231 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1017 21:17:32.031065  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:17:32.052790  830770 machine.go:93] provisionDockerMachine start ...
	I1017 21:17:32.052900  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:32.076724  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:32.077059  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:32.077077  830770 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:17:32.077758  830770 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1017 21:17:35.230904  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.230933  830770 ubuntu.go:182] provisioning hostname "newest-cni-229231"
	I1017 21:17:35.230996  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.249668  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.249988  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.250000  830770 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229231 && echo "newest-cni-229231" | sudo tee /etc/hostname
	I1017 21:17:35.413958  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:17:35.414035  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.435057  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:35.435455  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:35.435488  830770 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229231/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:17:35.591708  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:17:35.591799  830770 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:17:35.591862  830770 ubuntu.go:190] setting up certificates
	I1017 21:17:35.591895  830770 provision.go:84] configureAuth start
	I1017 21:17:35.591995  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:35.609144  830770 provision.go:143] copyHostCerts
	I1017 21:17:35.609297  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:17:35.609316  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:17:35.609405  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:17:35.609552  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:17:35.609557  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:17:35.609584  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:17:35.609634  830770 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:17:35.609639  830770 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:17:35.609661  830770 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:17:35.609709  830770 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229231 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-229231]
	W1017 21:17:33.446457  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:35.448550  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
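
The provisioning step above generates server.pem with the SAN list shown (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-229231). As a minimal sketch, assuming a PEM-encoded certificate at a placeholder path (not a file from this run), the embedded SANs can be inspected with Go's standard crypto/x509 package:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Placeholder path; any PEM-encoded certificate such as the generated server.pem works.
        data, err := os.ReadFile("server.pem")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Print the subject alternative names embedded at provisioning time.
        fmt.Println("DNS SANs:", cert.DNSNames)
        fmt.Println("IP SANs:", cert.IPAddresses)
    }
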
	I1017 21:17:35.925977  830770 provision.go:177] copyRemoteCerts
	I1017 21:17:35.926048  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:17:35.926101  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:35.946877  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.055784  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:17:36.078722  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:17:36.099207  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1017 21:17:36.117673  830770 provision.go:87] duration metric: took 525.737204ms to configureAuth
	I1017 21:17:36.117698  830770 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:17:36.117893  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:17:36.118005  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.135299  830770 main.go:141] libmachine: Using SSH client type: native
	I1017 21:17:36.135606  830770 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33864 <nil> <nil>}
	I1017 21:17:36.135628  830770 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:17:36.529551  830770 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:17:36.529577  830770 machine.go:96] duration metric: took 4.476759507s to provisionDockerMachine
	I1017 21:17:36.529587  830770 client.go:171] duration metric: took 10.376909381s to LocalClient.Create
	I1017 21:17:36.529600  830770 start.go:167] duration metric: took 10.3769818s to libmachine.API.Create "newest-cni-229231"
	I1017 21:17:36.529608  830770 start.go:293] postStartSetup for "newest-cni-229231" (driver="docker")
	I1017 21:17:36.529622  830770 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:17:36.529693  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:17:36.529734  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.548863  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.656261  830770 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:17:36.659923  830770 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:17:36.659952  830770 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:17:36.659963  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:17:36.660021  830770 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:17:36.660120  830770 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:17:36.660228  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:17:36.668164  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:36.700480  830770 start.go:296] duration metric: took 170.852961ms for postStartSetup
	I1017 21:17:36.700912  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.718213  830770 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:17:36.718503  830770 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:17:36.718554  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.736166  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.840599  830770 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:17:36.845564  830770 start.go:128] duration metric: took 10.696817914s to createHost
	I1017 21:17:36.845590  830770 start.go:83] releasing machines lock for "newest-cni-229231", held for 10.69694859s
	I1017 21:17:36.845663  830770 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:17:36.870955  830770 ssh_runner.go:195] Run: cat /version.json
	I1017 21:17:36.871006  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.871064  830770 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:17:36.871169  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:17:36.890159  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:36.893006  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:17:37.122070  830770 ssh_runner.go:195] Run: systemctl --version
	I1017 21:17:37.128670  830770 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:17:37.168991  830770 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:17:37.173298  830770 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:17:37.173394  830770 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:17:37.205394  830770 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1017 21:17:37.205430  830770 start.go:495] detecting cgroup driver to use...
	I1017 21:17:37.205465  830770 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:17:37.205526  830770 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:17:37.224366  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:17:37.237784  830770 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:17:37.237852  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:17:37.256543  830770 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:17:37.277284  830770 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:17:37.414655  830770 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:17:37.541005  830770 docker.go:234] disabling docker service ...
	I1017 21:17:37.541103  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:17:37.564030  830770 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:17:37.577238  830770 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:17:37.700062  830770 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:17:37.826057  830770 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:17:37.839922  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:17:37.860612  830770 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:17:37.860715  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.870163  830770 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:17:37.870267  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.879740  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.888727  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.897649  830770 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:17:37.905938  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.914957  830770 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.937609  830770 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:17:37.948540  830770 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:17:37.957167  830770 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
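
The sed invocations above rewrite the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place (pause_image, cgroup_manager, conmon_cgroup and the default_sysctls list) before the daemon reload and restart that follow. A rough Go sketch of just the pause-image rewrite, using the same drop-in path; this mirrors the pattern, not minikube's actual implementation:

    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        // Same drop-in file the sed commands in the log edit.
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            panic(err)
        }
        // Replace any existing pause_image line, mirroring the first sed expression above.
        re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        if err := os.WriteFile(conf, out, 0o644); err != nil {
            panic(err)
        }
    }
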
	I1017 21:17:37.964892  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.099253  830770 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1017 21:17:38.240063  830770 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:17:38.240136  830770 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:17:38.244811  830770 start.go:563] Will wait 60s for crictl version
	I1017 21:17:38.244925  830770 ssh_runner.go:195] Run: which crictl
	I1017 21:17:38.249281  830770 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:17:38.275725  830770 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:17:38.275883  830770 ssh_runner.go:195] Run: crio --version
	I1017 21:17:38.308570  830770 ssh_runner.go:195] Run: crio --version
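
After restarting crio, the start logic waits up to 60s for /var/run/crio/crio.sock to appear before querying crictl for the runtime version. A small sketch of that kind of wait loop in Go, with the path and timeout taken from the log and the poll interval chosen arbitrarily:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls for a filesystem path until it exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            panic(err)
        }
        fmt.Println("cri-o socket is ready")
    }
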
	I1017 21:17:38.340791  830770 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:17:38.343764  830770 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:17:38.370087  830770 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:17:38.375055  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:17:38.389020  830770 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 21:17:38.391841  830770 kubeadm.go:883] updating cluster {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:17:38.391994  830770 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:17:38.392081  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.432568  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.432593  830770 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:17:38.432650  830770 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:17:38.460578  830770 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:17:38.460602  830770 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:17:38.460611  830770 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:17:38.460723  830770 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-229231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:17:38.460816  830770 ssh_runner.go:195] Run: crio config
	I1017 21:17:38.516431  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:38.516456  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:38.516477  830770 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 21:17:38.516510  830770 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229231 NodeName:newest-cni-229231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:17:38.516658  830770 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229231"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:17:38.516740  830770 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:17:38.526469  830770 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:17:38.526538  830770 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:17:38.534666  830770 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:17:38.547630  830770 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:17:38.561449  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
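
The generated kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above is a single multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that walks such a stream and prints each document's apiVersion and kind; the input path is a placeholder and gopkg.in/yaml.v3 is an assumed third-party dependency:

    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3" // assumed dependency for multi-document decoding
    )

    func main() {
        // Placeholder path for a kubeadm config like the one printed in the log.
        f, err := os.Open("kubeadm.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break // end of the YAML stream
                }
                panic(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }
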
	I1017 21:17:38.575259  830770 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:17:38.578744  830770 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:17:38.588729  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:17:38.701322  830770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:17:38.717843  830770 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231 for IP: 192.168.76.2
	I1017 21:17:38.717915  830770 certs.go:195] generating shared ca certs ...
	I1017 21:17:38.717946  830770 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:38.718125  830770 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:17:38.718210  830770 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:17:38.718245  830770 certs.go:257] generating profile certs ...
	I1017 21:17:38.718333  830770 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key
	I1017 21:17:38.718359  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt with IP's: []
	I1017 21:17:39.230829  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt ...
	I1017 21:17:39.230858  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.crt: {Name:mk374f432cfcb8f38f0f3620aea987f930973189 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231059  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key ...
	I1017 21:17:39.231074  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key: {Name:mk9a5a91826f85ec18ceb8bb2c0d21490d528c4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.231190  830770 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c
	I1017 21:17:39.231212  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1017 21:17:39.632870  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c ...
	I1017 21:17:39.632901  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c: {Name:mk1fc1882cd3e285fbb7cde7fecc4a73bff5842b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633094  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c ...
	I1017 21:17:39.633109  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c: {Name:mk63d7546a5a1042c9b899492162b207f9dfbd3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.633200  830770 certs.go:382] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt
	I1017 21:17:39.633291  830770 certs.go:386] copying /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c -> /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key
	I1017 21:17:39.633351  830770 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key
	I1017 21:17:39.633372  830770 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt with IP's: []
	I1017 21:17:39.776961  830770 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt ...
	I1017 21:17:39.776988  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt: {Name:mk568020e3c894822912675278ba0a7cb00e1d52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.777165  830770 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key ...
	I1017 21:17:39.777178  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key: {Name:mk2a6796c8f93ee4a1075bf9a9a8896dad2c6071 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:17:39.777358  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:17:39.777404  830770 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:17:39.777418  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:17:39.777442  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:17:39.777471  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:17:39.777529  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:17:39.777577  830770 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:17:39.778142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:17:39.797135  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:17:39.815142  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:17:39.833747  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:17:39.852420  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:17:39.874045  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:17:39.892689  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:17:39.910417  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:17:39.932877  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:17:39.952735  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:17:39.971022  830770 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:17:39.989320  830770 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:17:40.002762  830770 ssh_runner.go:195] Run: openssl version
	I1017 21:17:40.026952  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:17:40.039691  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046264  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.046382  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:17:40.089166  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:17:40.098476  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:17:40.107567  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.111936  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.112004  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:17:40.156191  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:17:40.165471  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:17:40.174450  830770 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179199  830770 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.179352  830770 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:17:40.222124  830770 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:17:40.230675  830770 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:17:40.234509  830770 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1017 21:17:40.234592  830770 kubeadm.go:400] StartCluster: {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableM
etrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:17:40.234704  830770 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:17:40.234766  830770 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:17:40.262504  830770 cri.go:89] found id: ""
	I1017 21:17:40.262583  830770 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:17:40.270697  830770 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1017 21:17:40.278891  830770 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1017 21:17:40.278997  830770 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1017 21:17:40.287270  830770 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1017 21:17:40.287340  830770 kubeadm.go:157] found existing configuration files:
	
	I1017 21:17:40.287407  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1017 21:17:40.295472  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1017 21:17:40.295584  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1017 21:17:40.302811  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1017 21:17:40.310442  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1017 21:17:40.310505  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1017 21:17:40.317875  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.326500  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1017 21:17:40.326593  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1017 21:17:40.334533  830770 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1017 21:17:40.342514  830770 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1017 21:17:40.342664  830770 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1017 21:17:40.350582  830770 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1017 21:17:40.399498  830770 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1017 21:17:40.399775  830770 kubeadm.go:318] [preflight] Running pre-flight checks
	I1017 21:17:40.425148  830770 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1017 21:17:40.425613  830770 kubeadm.go:318] KERNEL_VERSION: 5.15.0-1084-aws
	I1017 21:17:40.425689  830770 kubeadm.go:318] OS: Linux
	I1017 21:17:40.425766  830770 kubeadm.go:318] CGROUPS_CPU: enabled
	I1017 21:17:40.425854  830770 kubeadm.go:318] CGROUPS_CPUACCT: enabled
	I1017 21:17:40.425938  830770 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1017 21:17:40.426015  830770 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1017 21:17:40.426105  830770 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1017 21:17:40.426195  830770 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1017 21:17:40.426282  830770 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1017 21:17:40.426370  830770 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1017 21:17:40.426455  830770 kubeadm.go:318] CGROUPS_BLKIO: enabled
	I1017 21:17:40.497867  830770 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1017 21:17:40.498029  830770 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1017 21:17:40.498154  830770 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1017 21:17:40.511576  830770 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1017 21:17:40.516931  830770 out.go:252]   - Generating certificates and keys ...
	I1017 21:17:40.517063  830770 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1017 21:17:40.517194  830770 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	W1017 21:17:37.945787  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:39.946831  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:41.008815  830770 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1017 21:17:41.303554  830770 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1017 21:17:41.808832  830770 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1017 21:17:41.881243  830770 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1017 21:17:43.445089  830770 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1017 21:17:43.445443  830770 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.061713  830770 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1017 21:17:44.062066  830770 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-229231] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1017 21:17:44.362183  830770 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1017 21:17:44.828686  830770 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1017 21:17:45.117371  830770 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1017 21:17:45.117460  830770 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1017 21:17:41.948873  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	W1017 21:17:44.448598  827198 pod_ready.go:104] pod "coredns-66bc5c9577-nvmzl" is not "Ready", error: <nil>
	I1017 21:17:46.946545  827198 pod_ready.go:94] pod "coredns-66bc5c9577-nvmzl" is "Ready"
	I1017 21:17:46.946568  827198 pod_ready.go:86] duration metric: took 40.506182733s for pod "coredns-66bc5c9577-nvmzl" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.950489  827198 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.958748  827198 pod_ready.go:94] pod "etcd-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.958770  827198 pod_ready.go:86] duration metric: took 8.257866ms for pod "etcd-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.968348  827198 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.986155  827198 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:46.986253  827198 pod_ready.go:86] duration metric: took 17.869473ms for pod "kube-apiserver-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:46.989889  827198 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.145095  827198 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:47.145179  827198 pod_ready.go:86] duration metric: took 155.195762ms for pod "kube-controller-manager-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.345096  827198 pod_ready.go:83] waiting for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.744574  827198 pod_ready.go:94] pod "kube-proxy-rh2gh" is "Ready"
	I1017 21:17:47.744651  827198 pod_ready.go:86] duration metric: took 399.477512ms for pod "kube-proxy-rh2gh" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:47.944362  827198 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345130  827198 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-332023" is "Ready"
	I1017 21:17:48.345158  827198 pod_ready.go:86] duration metric: took 400.767186ms for pod "kube-scheduler-default-k8s-diff-port-332023" in "kube-system" namespace to be "Ready" or be gone ...
	I1017 21:17:48.345171  827198 pod_ready.go:40] duration metric: took 41.909274783s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1017 21:17:48.445850  827198 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:17:48.449135  827198 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-332023" cluster and "default" namespace by default
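
The interleaved 827198 lines are the parallel default-k8s-diff-port-332023 profile polling its kube-system pods until each reports Ready. A compact client-go sketch of a similar readiness poll; the kubeconfig path, namespace and pod name below are placeholders, not values from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Placeholder kubeconfig; the test harness uses its own per-profile kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll every 2s, for up to 4m, until the pod carries the Ready condition.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-example", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat transient errors as "not ready yet"
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return true, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }
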
	I1017 21:17:45.974439  830770 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1017 21:17:46.193541  830770 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1017 21:17:47.888679  830770 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1017 21:17:48.619292  830770 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1017 21:17:49.045086  830770 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1017 21:17:49.045679  830770 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1017 21:17:49.048322  830770 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1017 21:17:49.051470  830770 out.go:252]   - Booting up control plane ...
	I1017 21:17:49.051575  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1017 21:17:49.051661  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1017 21:17:49.052583  830770 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1017 21:17:49.068158  830770 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1017 21:17:49.069121  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1017 21:17:49.077471  830770 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1017 21:17:49.078223  830770 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1017 21:17:49.078478  830770 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1017 21:17:49.219544  830770 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1017 21:17:49.219687  830770 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1017 21:17:50.723305  830770 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.501459653s
	I1017 21:17:50.723755  830770 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1017 21:17:50.724084  830770 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1017 21:17:50.724217  830770 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1017 21:17:50.724309  830770 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1017 21:17:52.994819  830770 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.270025456s
	I1017 21:17:54.905875  830770 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 4.182022258s
	I1017 21:17:56.725976  830770 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 6.001885509s
	I1017 21:17:56.747203  830770 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1017 21:17:56.762458  830770 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1017 21:17:56.776965  830770 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1017 21:17:56.777180  830770 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-229231 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1017 21:17:56.788814  830770 kubeadm.go:318] [bootstrap-token] Using token: dfhkce.8y881vui82au3otr
	I1017 21:17:56.793729  830770 out.go:252]   - Configuring RBAC rules ...
	I1017 21:17:56.793864  830770 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1017 21:17:56.795886  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1017 21:17:56.803738  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1017 21:17:56.807710  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1017 21:17:56.811280  830770 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1017 21:17:56.817302  830770 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1017 21:17:57.133021  830770 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1017 21:17:57.620314  830770 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1017 21:17:58.133883  830770 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1017 21:17:58.135079  830770 kubeadm.go:318] 
	I1017 21:17:58.135223  830770 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1017 21:17:58.135249  830770 kubeadm.go:318] 
	I1017 21:17:58.135376  830770 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1017 21:17:58.135391  830770 kubeadm.go:318] 
	I1017 21:17:58.135426  830770 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1017 21:17:58.135513  830770 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1017 21:17:58.135578  830770 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1017 21:17:58.135609  830770 kubeadm.go:318] 
	I1017 21:17:58.135673  830770 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1017 21:17:58.135682  830770 kubeadm.go:318] 
	I1017 21:17:58.135736  830770 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1017 21:17:58.135745  830770 kubeadm.go:318] 
	I1017 21:17:58.135810  830770 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1017 21:17:58.135913  830770 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1017 21:17:58.136012  830770 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1017 21:17:58.136022  830770 kubeadm.go:318] 
	I1017 21:17:58.136140  830770 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1017 21:17:58.136263  830770 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1017 21:17:58.136273  830770 kubeadm.go:318] 
	I1017 21:17:58.136379  830770 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136500  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be \
	I1017 21:17:58.136527  830770 kubeadm.go:318] 	--control-plane 
	I1017 21:17:58.136535  830770 kubeadm.go:318] 
	I1017 21:17:58.136643  830770 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1017 21:17:58.136657  830770 kubeadm.go:318] 
	I1017 21:17:58.136764  830770 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token dfhkce.8y881vui82au3otr \
	I1017 21:17:58.136882  830770 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:c23b1c89b0b115a411da4dd6272472c68b0d55f96a7c82c560456f258a4fc9be 
	I1017 21:17:58.140499  830770 kubeadm.go:318] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1017 21:17:58.140743  830770 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1017 21:17:58.140859  830770 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
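
The join commands printed above carry a --discovery-token-ca-cert-hash of the form sha256:<hex>, which kubeadm derives from the cluster CA certificate's DER-encoded Subject Public Key Info. A short Go sketch that recomputes such a hash from a PEM CA file; the path is a placeholder (on the node, the CA copied earlier in this log lives at /var/lib/minikube/certs/ca.crt):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Placeholder path to the cluster CA certificate in PEM form.
        data, err := os.ReadFile("ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }
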
	I1017 21:17:58.140880  830770 cni.go:84] Creating CNI manager for ""
	I1017 21:17:58.140888  830770 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:17:58.145921  830770 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1017 21:17:58.148955  830770 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1017 21:17:58.153324  830770 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1017 21:17:58.153351  830770 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1017 21:17:58.167635  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1017 21:17:58.518826  830770 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1017 21:17:58.518991  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:58.519168  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-229231 minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0 minikube.k8s.io/name=newest-cni-229231 minikube.k8s.io/primary=true
	I1017 21:17:58.757619  830770 ops.go:34] apiserver oom_adj: -16
	I1017 21:17:58.757777  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.258478  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:17:59.758332  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.259233  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:00.758014  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.258315  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:01.758342  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.258499  830770 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1017 21:18:02.408060  830770 kubeadm.go:1113] duration metric: took 3.889134071s to wait for elevateKubeSystemPrivileges
	I1017 21:18:02.408087  830770 kubeadm.go:402] duration metric: took 22.173501106s to StartCluster
	I1017 21:18:02.408103  830770 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.408166  830770 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:02.409084  830770 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:02.409291  830770 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:18:02.409441  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1017 21:18:02.409691  830770 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:02.409721  830770 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:18:02.409781  830770 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229231"
	I1017 21:18:02.409794  830770 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-229231"
	I1017 21:18:02.409817  830770 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:02.410337  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.410858  830770 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229231"
	I1017 21:18:02.410878  830770 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229231"
	I1017 21:18:02.411151  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.415302  830770 out.go:179] * Verifying Kubernetes components...
	I1017 21:18:02.419994  830770 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:02.446892  830770 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:18:02.449947  830770 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:02.449968  830770 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:18:02.450036  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:02.460628  830770 addons.go:238] Setting addon default-storageclass=true in "newest-cni-229231"
	I1017 21:18:02.460665  830770 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:02.461161  830770 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:02.496741  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:02.503299  830770 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:02.503318  830770 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:18:02.503375  830770 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:02.531519  830770 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33864 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:02.794332  830770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:02.889822  830770 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:02.891523  830770 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1017 21:18:02.891630  830770 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:04.148640  830770 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.256989443s)
	I1017 21:18:04.150180  830770 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:18:04.150231  830770 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:18:04.150463  830770 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.258918501s)
	I1017 21:18:04.150481  830770 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1017 21:18:04.151214  830770 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261365415s)
	I1017 21:18:04.154639  830770 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1017 21:18:04.159456  830770 addons.go:514] duration metric: took 1.749719902s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1017 21:18:04.193245  830770 api_server.go:72] duration metric: took 1.783925587s to wait for apiserver process to appear ...
	I1017 21:18:04.193274  830770 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:18:04.193296  830770 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:04.209580  830770 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:18:04.217350  830770 api_server.go:141] control plane version: v1.34.1
	I1017 21:18:04.217383  830770 api_server.go:131] duration metric: took 24.10087ms to wait for apiserver health ...
	I1017 21:18:04.217393  830770 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:18:04.237242  830770 system_pods.go:59] 8 kube-system pods found
	I1017 21:18:04.237274  830770 system_pods.go:61] "coredns-66bc5c9577-zsbw9" [ab5b72a4-6a5d-4f98-9f27-a6b79f1c56cf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:04.237283  830770 system_pods.go:61] "etcd-newest-cni-229231" [1972c4be-a973-41cd-a7db-f940c7bfedcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:18:04.237291  830770 system_pods.go:61] "kindnet-lwztk" [1ce01431-d96e-4be0-aee9-f5172d35f7a0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1017 21:18:04.237296  830770 system_pods.go:61] "kube-apiserver-newest-cni-229231" [bc06de01-5287-4d5d-9c16-8917e6f62b6c] Running
	I1017 21:18:04.237303  830770 system_pods.go:61] "kube-controller-manager-newest-cni-229231" [62b40139-100e-4c66-827d-de841c45bc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:18:04.237309  830770 system_pods.go:61] "kube-proxy-ws4mh" [66800a1d-51bc-41d0-9811-463a149fc9cd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1017 21:18:04.237320  830770 system_pods.go:61] "kube-scheduler-newest-cni-229231" [2b082865-cbcb-428b-b44b-77e744c7e89b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:18:04.237327  830770 system_pods.go:61] "storage-provisioner" [8a3d6e07-be6d-445a-b6af-7ef77edb6905] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:04.237332  830770 system_pods.go:74] duration metric: took 19.93441ms to wait for pod list to return data ...
	I1017 21:18:04.237341  830770 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:18:04.254560  830770 default_sa.go:45] found service account: "default"
	I1017 21:18:04.254593  830770 default_sa.go:55] duration metric: took 17.245983ms for default service account to be created ...
	I1017 21:18:04.254606  830770 kubeadm.go:586] duration metric: took 1.84529353s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:04.254624  830770 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:18:04.271224  830770 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:18:04.271260  830770 node_conditions.go:123] node cpu capacity is 2
	I1017 21:18:04.271272  830770 node_conditions.go:105] duration metric: took 16.642694ms to run NodePressure ...
	I1017 21:18:04.271285  830770 start.go:241] waiting for startup goroutines ...
	I1017 21:18:04.654399  830770 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-229231" context rescaled to 1 replicas
	I1017 21:18:04.654433  830770 start.go:246] waiting for cluster config update ...
	I1017 21:18:04.654447  830770 start.go:255] writing updated cluster config ...
	I1017 21:18:04.654749  830770 ssh_runner.go:195] Run: rm -f paused
	I1017 21:18:04.738789  830770 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:18:04.741993  830770 out.go:179] * Done! kubectl is now configured to use "newest-cni-229231" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.575399002Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.579667362Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=314c2a79-c401-4224-8aa3-b889dac7f84c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.585788513Z" level=info msg="Ran pod sandbox dfeec432512a701c2ef9e5aec96025a4f62796267ae88f88957055f8e0a53d38 with infra container: kube-system/kindnet-lwztk/POD" id=314c2a79-c401-4224-8aa3-b889dac7f84c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.587686711Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=52d4aca3-c295-4048-a613-d5dbde2d6cae name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.589604815Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2f69444f-8de2-44eb-9daf-d5ca53eae8a0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.598309613Z" level=info msg="Creating container: kube-system/kindnet-lwztk/kindnet-cni" id=ee2ad951-095b-464a-9b32-e399c9a8ebf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.598721318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.607363732Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-ws4mh/POD" id=df5576bc-fec1-4d83-bac5-3ddd4af6a19b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.608102219Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.622062351Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df5576bc-fec1-4d83-bac5-3ddd4af6a19b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.625266001Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.632734894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.640733919Z" level=info msg="Ran pod sandbox bb6b4c60515c869bff5bb280087f396dac3ebe1d97d89e7d82ebbdc52d37401d with infra container: kube-system/kube-proxy-ws4mh/POD" id=df5576bc-fec1-4d83-bac5-3ddd4af6a19b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.65359259Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=2f8b76fa-08ed-40cf-ab9d-3792f6709ac9 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.663315246Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=84db2e61-6dae-4755-8507-16750032d399 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.679546914Z" level=info msg="Creating container: kube-system/kube-proxy-ws4mh/kube-proxy" id=80e85047-1b37-46c5-8352-4c5cdde94db1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.680024836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.711734794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.721836564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.756232996Z" level=info msg="Created container 16924b7511fbb7a8e8ab2e3acfdb072cf8949ee87e322b17aafd097f6bc88a76: kube-system/kindnet-lwztk/kindnet-cni" id=ee2ad951-095b-464a-9b32-e399c9a8ebf0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.760045934Z" level=info msg="Starting container: 16924b7511fbb7a8e8ab2e3acfdb072cf8949ee87e322b17aafd097f6bc88a76" id=66c9eb7b-3d89-4aa9-b1f2-bcb901e6f584 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.78747798Z" level=info msg="Started container" PID=1487 containerID=16924b7511fbb7a8e8ab2e3acfdb072cf8949ee87e322b17aafd097f6bc88a76 description=kube-system/kindnet-lwztk/kindnet-cni id=66c9eb7b-3d89-4aa9-b1f2-bcb901e6f584 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dfeec432512a701c2ef9e5aec96025a4f62796267ae88f88957055f8e0a53d38
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.802327978Z" level=info msg="Created container 12eaddc78ac0beba1b6d1c2653c6e657f7bc2bf47d38650a992b13fe0f49571e: kube-system/kube-proxy-ws4mh/kube-proxy" id=80e85047-1b37-46c5-8352-4c5cdde94db1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.804798737Z" level=info msg="Starting container: 12eaddc78ac0beba1b6d1c2653c6e657f7bc2bf47d38650a992b13fe0f49571e" id=a58cdd0d-f374-4646-b94a-007c0571093b name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:03 newest-cni-229231 crio[836]: time="2025-10-17T21:18:03.812535908Z" level=info msg="Started container" PID=1493 containerID=12eaddc78ac0beba1b6d1c2653c6e657f7bc2bf47d38650a992b13fe0f49571e description=kube-system/kube-proxy-ws4mh/kube-proxy id=a58cdd0d-f374-4646-b94a-007c0571093b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb6b4c60515c869bff5bb280087f396dac3ebe1d97d89e7d82ebbdc52d37401d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	12eaddc78ac0b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   2 seconds ago       Running             kube-proxy                0                   bb6b4c60515c8       kube-proxy-ws4mh                            kube-system
	16924b7511fbb       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 seconds ago       Running             kindnet-cni               0                   dfeec432512a7       kindnet-lwztk                               kube-system
	3ea9a2f5e7597       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   aca703e1f4725       kube-scheduler-newest-cni-229231            kube-system
	e8764b6a445dd       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   206f85b37d632       kube-apiserver-newest-cni-229231            kube-system
	f233a67d8be4d       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   9a1358cacdb23       kube-controller-manager-newest-cni-229231   kube-system
	fd7a67ac4dc02       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   b1eca97e8cb32       etcd-newest-cni-229231                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-229231
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-229231
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-229231
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:17:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-229231
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:17:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:17:57 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:17:57 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:17:57 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 21:17:57 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-229231
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4099547-187b-4e47-bfa5-074f4f8fb46b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-229231                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9s
	  kube-system                 kindnet-lwztk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4s
	  kube-system                 kube-apiserver-newest-cni-229231             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-229231    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-ws4mh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-newest-cni-229231             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2s                 kube-proxy       
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  16s (x8 over 16s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16s (x8 over 16s)  kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16s (x8 over 16s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   Starting                 9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9s                 kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9s                 kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9s                 kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-229231 event: Registered Node newest-cni-229231 in Controller
	
	
	==> dmesg <==
	[Oct17 20:53] overlayfs: idmapped layers are currently not supported
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	[Oct17 21:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [fd7a67ac4dc0263d5033cd0094c142b648f0a5babc8e187f03e1dcd4bf61261d] <==
	{"level":"warn","ts":"2025-10-17T21:17:53.594943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.610762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.628502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.650903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.666042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.688588Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.702961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.716040Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.733252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.749999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.765272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.795031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.807613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.824275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.843322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.868799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.879663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.895253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.909839Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.930228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.952991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.986528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:53.992500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:54.006456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:17:54.073060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55004","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:06 up  4:00,  0 user,  load average: 3.71, 3.65, 3.25
	Linux newest-cni-229231 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [16924b7511fbb7a8e8ab2e3acfdb072cf8949ee87e322b17aafd097f6bc88a76] <==
	I1017 21:18:03.827593       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:18:03.827860       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:18:03.828025       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:18:03.828037       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:18:03.828051       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:18:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:18:04.122608       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:18:04.122681       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:18:04.122713       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:18:04.123708       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e8764b6a445dd5891d425b58c853efc1618c6887d5df2b57c97e20c57a8ecb9c] <==
	I1017 21:17:54.937784       1 policy_source.go:240] refreshing policies
	I1017 21:17:54.948442       1 controller.go:667] quota admission added evaluator for: namespaces
	E1017 21:17:54.977059       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1017 21:17:55.049921       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:17:55.059140       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1017 21:17:55.080331       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:17:55.080407       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1017 21:17:55.182046       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:17:55.614950       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1017 21:17:55.622043       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1017 21:17:55.622071       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:17:56.446336       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:17:56.493569       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:17:56.638581       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1017 21:17:56.646827       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1017 21:17:56.648145       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:17:56.653131       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:17:56.870391       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:17:57.571052       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:17:57.619263       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1017 21:17:57.637070       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1017 21:18:02.786802       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:18:02.856458       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:18:02.894341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:18:02.904226       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [f233a67d8be4dbcbfa15da417f3b51062afe7d4fec6d3a85e68683fde2f76358] <==
	I1017 21:18:02.019952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 21:18:02.019964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 21:18:02.020506       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 21:18:02.020530       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 21:18:02.020540       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:18:02.020568       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1017 21:18:02.021051       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1017 21:18:02.021065       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:18:02.021079       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:18:02.027025       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1017 21:18:02.027197       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:18:02.053762       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1017 21:18:02.065237       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1017 21:18:02.066497       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1017 21:18:02.066542       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:18:02.066721       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1017 21:18:02.067374       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:18:02.068050       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1017 21:18:02.069263       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:18:02.071518       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1017 21:18:02.075008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:18:02.075009       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:18:02.078190       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:18:02.084481       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1017 21:18:02.091204       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [12eaddc78ac0beba1b6d1c2653c6e657f7bc2bf47d38650a992b13fe0f49571e] <==
	I1017 21:18:03.937753       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:18:04.161290       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:18:04.361402       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:18:04.361508       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:18:04.361619       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:18:04.383804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:18:04.383926       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:18:04.387922       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:18:04.388296       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:18:04.388525       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:18:04.389754       1 config.go:200] "Starting service config controller"
	I1017 21:18:04.390249       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:18:04.390325       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:18:04.390397       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:18:04.390447       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:18:04.390655       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:18:04.391589       1 config.go:309] "Starting node config controller"
	I1017 21:18:04.391607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:18:04.391624       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:18:04.490767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:18:04.490869       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:18:04.490892       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ea9a2f5e759721ce7ad4fc218c6e71cb1fbd607e813253734ffc6ab98751a88] <==
	E1017 21:17:54.925738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1017 21:17:54.925826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1017 21:17:54.934787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1017 21:17:54.934939       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:17:54.935037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:17:54.935144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1017 21:17:54.935244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1017 21:17:54.935334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 21:17:54.935798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:17:54.936001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1017 21:17:54.936042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1017 21:17:54.936115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:17:54.936211       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:17:54.936264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1017 21:17:54.936278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 21:17:55.779096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1017 21:17:55.840867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1017 21:17:55.913506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1017 21:17:55.944198       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1017 21:17:55.966011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1017 21:17:56.037575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1017 21:17:56.092472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1017 21:17:56.154104       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1017 21:17:56.294740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1017 21:17:58.592839       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:17:57 newest-cni-229231 kubelet[1307]: I1017 21:17:57.923830    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2df4261d8aec30c0803e28a9703c044a-ca-certs\") pod \"kube-controller-manager-newest-cni-229231\" (UID: \"2df4261d8aec30c0803e28a9703c044a\") " pod="kube-system/kube-controller-manager-newest-cni-229231"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.486194    1307 apiserver.go:52] "Watching apiserver"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.520558    1307 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.620278    1307 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.621467    1307 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: E1017 21:17:58.683475    1307 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-229231\" already exists" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: E1017 21:17:58.697995    1307 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-229231\" already exists" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.769418    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-229231" podStartSLOduration=1.76940048 podStartE2EDuration="1.76940048s" podCreationTimestamp="2025-10-17 21:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:17:58.71940013 +0000 UTC m=+1.316148498" watchObservedRunningTime="2025-10-17 21:17:58.76940048 +0000 UTC m=+1.366148840"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.769574    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-229231" podStartSLOduration=1.769567317 podStartE2EDuration="1.769567317s" podCreationTimestamp="2025-10-17 21:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:17:58.769086253 +0000 UTC m=+1.365834605" watchObservedRunningTime="2025-10-17 21:17:58.769567317 +0000 UTC m=+1.366315669"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.800375    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-229231" podStartSLOduration=1.800341747 podStartE2EDuration="1.800341747s" podCreationTimestamp="2025-10-17 21:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:17:58.787543647 +0000 UTC m=+1.384292007" watchObservedRunningTime="2025-10-17 21:17:58.800341747 +0000 UTC m=+1.397090107"
	Oct 17 21:17:58 newest-cni-229231 kubelet[1307]: I1017 21:17:58.816785    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-229231" podStartSLOduration=2.816765926 podStartE2EDuration="2.816765926s" podCreationTimestamp="2025-10-17 21:17:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:17:58.801339445 +0000 UTC m=+1.398087805" watchObservedRunningTime="2025-10-17 21:17:58.816765926 +0000 UTC m=+1.413514442"
	Oct 17 21:18:02 newest-cni-229231 kubelet[1307]: I1017 21:18:02.052758    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 21:18:02 newest-cni-229231 kubelet[1307]: I1017 21:18:02.053394    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.091730    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-xtables-lock\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.091875    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/66800a1d-51bc-41d0-9811-463a149fc9cd-kube-proxy\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.091902    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqg8\" (UniqueName: \"kubernetes.io/projected/1ce01431-d96e-4be0-aee9-f5172d35f7a0-kube-api-access-tvqg8\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.091923    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-lib-modules\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.092061    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82cjx\" (UniqueName: \"kubernetes.io/projected/66800a1d-51bc-41d0-9811-463a149fc9cd-kube-api-access-82cjx\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.092084    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-cni-cfg\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.092207    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-xtables-lock\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.092226    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-lib-modules\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: I1017 21:18:03.298009    1307 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:18:03 newest-cni-229231 kubelet[1307]: W1017 21:18:03.582303    1307 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/crio-dfeec432512a701c2ef9e5aec96025a4f62796267ae88f88957055f8e0a53d38 WatchSource:0}: Error finding container dfeec432512a701c2ef9e5aec96025a4f62796267ae88f88957055f8e0a53d38: Status 404 returned error can't find the container with id dfeec432512a701c2ef9e5aec96025a4f62796267ae88f88957055f8e0a53d38
	Oct 17 21:18:04 newest-cni-229231 kubelet[1307]: I1017 21:18:04.739242    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ws4mh" podStartSLOduration=2.739223997 podStartE2EDuration="2.739223997s" podCreationTimestamp="2025-10-17 21:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:18:04.738756332 +0000 UTC m=+7.335504692" watchObservedRunningTime="2025-10-17 21:18:04.739223997 +0000 UTC m=+7.335972464"
	Oct 17 21:18:04 newest-cni-229231 kubelet[1307]: I1017 21:18:04.967328    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-lwztk" podStartSLOduration=2.967309403 podStartE2EDuration="2.967309403s" podCreationTimestamp="2025-10-17 21:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-17 21:18:04.82819966 +0000 UTC m=+7.424948020" watchObservedRunningTime="2025-10-17 21:18:04.967309403 +0000 UTC m=+7.564057754"
	

-- /stdout --
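
The kube-scheduler entries near the top of this log ("cannot list resource \"nodes\" / \"namespaces\" / \"replicationcontrollers\" ... forbidden") are the kind of transient denial commonly seen while a freshly restarted control plane is still syncing RBAC; the "Caches are synced" line that follows suggests they cleared on their own. Below is a minimal sketch, not part of the test harness, for confirming afterwards that the scheduler identity really does have those list permissions. It assumes kubectl is on PATH and reuses the context name from this run; "kubectl auth can-i" exits non-zero when the answer is "no", so the sketch ignores the error and only reports the printed answer.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-229231" // kubeconfig context name from this run
	resources := []string{"nodes", "namespaces", "replicationcontrollers"}

	for _, r := range resources {
		// "can-i" exits non-zero when the answer is "no", so the error is ignored
		// and only the yes/no answer is reported.
		out, _ := exec.Command("kubectl", "--context", ctx, "auth", "can-i",
			"list", r, "--as=system:kube-scheduler").CombinedOutput()
		fmt.Printf("list %-24s as system:kube-scheduler: %s\n", r, strings.TrimSpace(string(out)))
	}
}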
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229231 -n newest-cni-229231
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-229231 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zsbw9 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner: exit status 1 (112.89711ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zsbw9" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (3.05s)
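
The post-mortem above first finds the not-yet-running pods with a field selector and then describes all of them in a single kubectl call, which exits with status 1 as soon as any of the listed names cannot be resolved in the target namespace (here the describe was issued without a namespace and both names came back NotFound). Below is a minimal sketch of the same check driven from Go, describing each pod individually so one NotFound does not mask the others. It assumes kubectl on PATH, the context name from this run, and that the pods of interest live in kube-system; it is not the harness's helpers_test.go code.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-229231" // kubeconfig context name taken from the run above

	// Same field selector the post-mortem uses to find pods that are not Running.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		log.Fatalf("listing non-running pods: %v", err)
	}

	pods := strings.Fields(string(out))
	fmt.Println("non-running pods:", pods)

	// Describe each pod on its own so a single NotFound does not abort the rest.
	// The kube-system namespace is an assumption; the harness passed no namespace.
	for _, p := range pods {
		desc, err := exec.Command("kubectl", "--context", ctx, "-n", "kube-system",
			"describe", "pod", p).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s failed (pod may already be gone): %v\n", p, err)
			continue
		}
		fmt.Println(string(desc))
	}
}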

x
+
TestStartStop/group/newest-cni/serial/Pause (6.08s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-229231 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-229231 --alsologtostderr -v=1: exit status 80 (1.99680525s)

-- stdout --
	* Pausing node newest-cni-229231 ... 
	
	

-- /stdout --
** stderr ** 
	I1017 21:18:25.208025  837119 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:18:25.209265  837119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:25.209308  837119 out.go:374] Setting ErrFile to fd 2...
	I1017 21:18:25.209330  837119 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:25.209620  837119 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:18:25.209954  837119 out.go:368] Setting JSON to false
	I1017 21:18:25.210010  837119 mustload.go:65] Loading cluster: newest-cni-229231
	I1017 21:18:25.210461  837119 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:25.211309  837119 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:25.230240  837119 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:25.230543  837119 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:25.291241  837119 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 21:18:25.281342491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:25.291920  837119 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-229231 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1017 21:18:25.295459  837119 out.go:179] * Pausing node newest-cni-229231 ... 
	I1017 21:18:25.298389  837119 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:25.298728  837119 ssh_runner.go:195] Run: systemctl --version
	I1017 21:18:25.298779  837119 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:25.319249  837119 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:25.421692  837119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:25.434413  837119 pause.go:52] kubelet running: true
	I1017 21:18:25.434481  837119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:25.662297  837119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:25.662445  837119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:25.731771  837119 cri.go:89] found id: "17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a"
	I1017 21:18:25.731794  837119 cri.go:89] found id: "19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318"
	I1017 21:18:25.731799  837119 cri.go:89] found id: "f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594"
	I1017 21:18:25.731803  837119 cri.go:89] found id: "b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5"
	I1017 21:18:25.731807  837119 cri.go:89] found id: "3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab"
	I1017 21:18:25.731811  837119 cri.go:89] found id: "0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82"
	I1017 21:18:25.731814  837119 cri.go:89] found id: ""
	I1017 21:18:25.731870  837119 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:25.744058  837119 retry.go:31] will retry after 355.129342ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:25Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:18:26.099628  837119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:26.115139  837119 pause.go:52] kubelet running: false
	I1017 21:18:26.115210  837119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:26.257436  837119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:26.257510  837119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:26.336564  837119 cri.go:89] found id: "17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a"
	I1017 21:18:26.336585  837119 cri.go:89] found id: "19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318"
	I1017 21:18:26.336590  837119 cri.go:89] found id: "f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594"
	I1017 21:18:26.336593  837119 cri.go:89] found id: "b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5"
	I1017 21:18:26.336596  837119 cri.go:89] found id: "3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab"
	I1017 21:18:26.336611  837119 cri.go:89] found id: "0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82"
	I1017 21:18:26.336614  837119 cri.go:89] found id: ""
	I1017 21:18:26.336663  837119 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:26.347914  837119 retry.go:31] will retry after 536.668071ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:26Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:18:26.885758  837119 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 21:18:26.898668  837119 pause.go:52] kubelet running: false
	I1017 21:18:26.898767  837119 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1017 21:18:27.046455  837119 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1017 21:18:27.046565  837119 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1017 21:18:27.119068  837119 cri.go:89] found id: "17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a"
	I1017 21:18:27.119177  837119 cri.go:89] found id: "19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318"
	I1017 21:18:27.119194  837119 cri.go:89] found id: "f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594"
	I1017 21:18:27.119200  837119 cri.go:89] found id: "b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5"
	I1017 21:18:27.119203  837119 cri.go:89] found id: "3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab"
	I1017 21:18:27.119208  837119 cri.go:89] found id: "0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82"
	I1017 21:18:27.119211  837119 cri.go:89] found id: ""
	I1017 21:18:27.119270  837119 ssh_runner.go:195] Run: sudo runc list -f json
	I1017 21:18:27.133558  837119 out.go:203] 
	W1017 21:18:27.136462  837119 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1017 21:18:27.136487  837119 out.go:285] * 
	* 
	W1017 21:18:27.144669  837119 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1017 21:18:27.147725  837119 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-229231 --alsologtostderr -v=1 failed: exit status 80
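
The stderr above shows why pause returned exit status 80: the kubelet was stopped and crictl still listed the kube-system containers, but every "sudo runc list -f json" attempt failed with "open /run/runc: no such file or directory", and after two back-off retries (355ms, then 536ms) the command gave up with GUEST_PAUSE. Below is a minimal sketch of that retry-then-fail shape; it is not minikube's retry.go or pause.go, and the command and delays are only illustrative.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithRetry runs the command up to attempts times, sleeping delay between
// failures and doubling it each time, and returns the last error if all fail.
func runWithRetry(attempts int, delay time.Duration, name string, args ...string) error {
	var err error
	for i := 0; i < attempts; i++ {
		out, e := exec.Command(name, args...).CombinedOutput()
		if e == nil {
			fmt.Print(string(out))
			return nil
		}
		err = fmt.Errorf("%s %v: %w\n%s", name, args, e, out)
		time.Sleep(delay)
		delay *= 2 // plain doubling; minikube's actual backoff and jitter differ
	}
	return err
}

func main() {
	// The same command the pause path runs over SSH inside the node; invoked
	// locally here only to illustrate the retry shape seen in the log.
	if err := runWithRetry(3, 350*time.Millisecond, "sudo", "runc", "list", "-f", "json"); err != nil {
		fmt.Println("giving up, as pause did:", err)
	}
}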
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-229231
helpers_test.go:243: (dbg) docker inspect newest-cni-229231:

-- stdout --
	[
	    {
	        "Id": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	        "Created": "2025-10-17T21:17:31.336902083Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 835267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:18:10.112942858Z",
	            "FinishedAt": "2025-10-17T21:18:08.964830446Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hosts",
	        "LogPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4-json.log",
	        "Name": "/newest-cni-229231",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-229231:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-229231",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	                "LowerDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-229231",
	                "Source": "/var/lib/docker/volumes/newest-cni-229231/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-229231",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-229231",
	                "name.minikube.sigs.k8s.io": "newest-cni-229231",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d687b5239424d80c61143f40c4dc6ea8f4218bf3cff6914a5e2ffa115cadccc",
	            "SandboxKey": "/var/run/docker/netns/9d687b523942",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-229231": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:72:e0:cc:04:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d7c3e6b3b2a4a2268a255e36474804d31f93559fc2897f501a551059144a9568",
	                    "EndpointID": "ad6530b66d76ae1bec7a8989065957b2b9e61f3ef6b3a8d0664193e88939dd0f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-229231",
	                        "b47c768b4eb7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
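
The inspect output confirms the container is still running and that SSH is published on 127.0.0.1:33869, the same port the pause command's cli_runner template resolved earlier in this log. Below is a minimal sketch that recovers the port directly with that template; it assumes docker on PATH and the container name from this run, and is not part of minikube or the harness.

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	name := "newest-cni-229231" // container name from the inspect output above

	// Same Go template cli_runner.go used earlier in this log to find the SSH port.
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		log.Fatalf("docker inspect %s: %v", name, err)
	}

	port := strings.TrimSpace(string(out))
	// sshutil.go above connected as user "docker" with the profile's id_rsa key.
	fmt.Printf("ssh -p %s -i <profile id_rsa> docker@127.0.0.1\n", port)
}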
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231
E1017 21:18:27.377892  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:27.384269  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:27.395698  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:27.417046  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:27.458398  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231: exit status 2 (338.288574ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25
E1017 21:18:27.540319  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:27.701572  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:18:28.022948  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25: (1.087086112s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                                                                                                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-332023 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-332023 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ stop    │ -p newest-cni-229231 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-332023                                                                                                                                                                                                               │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-229231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-332023                                                                                                                                                                                                               │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ newest-cni-229231 image list --format=json                                                                                                                                                                                                    │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p newest-cni-229231 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:18:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:18:09.683088  835071 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:18:09.683282  835071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:09.683295  835071 out.go:374] Setting ErrFile to fd 2...
	I1017 21:18:09.683301  835071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:09.683561  835071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:18:09.683927  835071 out.go:368] Setting JSON to false
	I1017 21:18:09.685750  835071 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14436,"bootTime":1760721454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:18:09.685855  835071 start.go:141] virtualization:  
	I1017 21:18:09.689802  835071 out.go:179] * [newest-cni-229231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:18:09.692843  835071 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:18:09.692995  835071 notify.go:220] Checking for updates...
	I1017 21:18:09.701576  835071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:18:09.704448  835071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:09.707438  835071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:18:09.710712  835071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:18:09.713844  835071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:18:09.717511  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:09.718289  835071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:18:09.771027  835071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:18:09.771157  835071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:09.894634  835071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-10-17 21:18:09.882351105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:09.894753  835071 docker.go:318] overlay module found
	I1017 21:18:09.898386  835071 out.go:179] * Using the docker driver based on existing profile
	I1017 21:18:09.901282  835071 start.go:305] selected driver: docker
	I1017 21:18:09.901301  835071 start.go:925] validating driver "docker" against &{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:09.901417  835071 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:18:09.902078  835071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:10.001282  835071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-10-17 21:18:09.9920263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:10.001673  835071 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:10.001718  835071 cni.go:84] Creating CNI manager for ""
	I1017 21:18:10.001780  835071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:18:10.001830  835071 start.go:349] cluster config:
	{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:10.005066  835071 out.go:179] * Starting "newest-cni-229231" primary control-plane node in "newest-cni-229231" cluster
	I1017 21:18:10.008194  835071 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:18:10.011300  835071 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:18:10.014307  835071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:18:10.014268  835071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:18:10.014389  835071 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:18:10.014400  835071 cache.go:58] Caching tarball of preloaded images
	I1017 21:18:10.014491  835071 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:18:10.014500  835071 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:18:10.014656  835071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:18:10.045452  835071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:18:10.045475  835071 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:18:10.045493  835071 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:18:10.045517  835071 start.go:360] acquireMachinesLock for newest-cni-229231: {Name:mk13ee1c4f50a5b33a03132c2a1b074ef28a6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:18:10.045577  835071 start.go:364] duration metric: took 40.525µs to acquireMachinesLock for "newest-cni-229231"
	I1017 21:18:10.045598  835071 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:18:10.045604  835071 fix.go:54] fixHost starting: 
	I1017 21:18:10.045887  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:10.068246  835071 fix.go:112] recreateIfNeeded on newest-cni-229231: state=Stopped err=<nil>
	W1017 21:18:10.068276  835071 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 21:18:10.072040  835071 out.go:252] * Restarting existing docker container for "newest-cni-229231" ...
	I1017 21:18:10.072149  835071 cli_runner.go:164] Run: docker start newest-cni-229231
	I1017 21:18:10.353459  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:10.378306  835071 kic.go:430] container "newest-cni-229231" state is running.
	I1017 21:18:10.378757  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:10.398921  835071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:18:10.399305  835071 machine.go:93] provisionDockerMachine start ...
	I1017 21:18:10.399372  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:10.429645  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:10.429961  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:10.429970  835071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:18:10.431225  835071 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56936->127.0.0.1:33869: read: connection reset by peer
	I1017 21:18:13.582761  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:18:13.582792  835071 ubuntu.go:182] provisioning hostname "newest-cni-229231"
	I1017 21:18:13.582872  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:13.600799  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:13.601126  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:13.601142  835071 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229231 && echo "newest-cni-229231" | sudo tee /etc/hostname
	I1017 21:18:13.760450  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:18:13.760543  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:13.778118  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:13.778428  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:13.778445  835071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229231/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:18:13.927507  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:18:13.927534  835071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:18:13.927561  835071 ubuntu.go:190] setting up certificates
	I1017 21:18:13.927571  835071 provision.go:84] configureAuth start
	I1017 21:18:13.927635  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:13.946241  835071 provision.go:143] copyHostCerts
	I1017 21:18:13.946311  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:18:13.946332  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:18:13.946411  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:18:13.946517  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:18:13.946529  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:18:13.946554  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:18:13.946622  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:18:13.946632  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:18:13.946656  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:18:13.946706  835071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229231 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-229231]
	I1017 21:18:15.202347  835071 provision.go:177] copyRemoteCerts
	I1017 21:18:15.202415  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:18:15.202458  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.219649  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.323018  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:18:15.341933  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1017 21:18:15.359661  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:18:15.376675  835071 provision.go:87] duration metric: took 1.449075798s to configureAuth
	I1017 21:18:15.376705  835071 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:18:15.376890  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:15.376999  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.394038  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:15.394352  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:15.394372  835071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:18:15.683265  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:18:15.683290  835071 machine.go:96] duration metric: took 5.283969937s to provisionDockerMachine
	I1017 21:18:15.683302  835071 start.go:293] postStartSetup for "newest-cni-229231" (driver="docker")
	I1017 21:18:15.683313  835071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:18:15.683375  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:18:15.683420  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.700501  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.803351  835071 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:18:15.806687  835071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:18:15.806714  835071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:18:15.806725  835071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:18:15.806776  835071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:18:15.806860  835071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:18:15.806957  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:18:15.814275  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:18:15.832196  835071 start.go:296] duration metric: took 148.879057ms for postStartSetup
	I1017 21:18:15.832289  835071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:18:15.832335  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.849401  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.947917  835071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:18:15.952685  835071 fix.go:56] duration metric: took 5.907073888s for fixHost
	I1017 21:18:15.952706  835071 start.go:83] releasing machines lock for "newest-cni-229231", held for 5.907121101s
	I1017 21:18:15.952772  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:15.970072  835071 ssh_runner.go:195] Run: cat /version.json
	I1017 21:18:15.970141  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.970150  835071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:18:15.970204  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.995301  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:16.000754  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:16.099260  835071 ssh_runner.go:195] Run: systemctl --version
	I1017 21:18:16.194019  835071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:18:16.230747  835071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:18:16.235786  835071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:18:16.235917  835071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:18:16.244140  835071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:18:16.244166  835071 start.go:495] detecting cgroup driver to use...
	I1017 21:18:16.244230  835071 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:18:16.244293  835071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:18:16.259633  835071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:18:16.272748  835071 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:18:16.272810  835071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:18:16.288371  835071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:18:16.301601  835071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:18:16.409054  835071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:18:16.526124  835071 docker.go:234] disabling docker service ...
	I1017 21:18:16.526224  835071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:18:16.542534  835071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:18:16.555769  835071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:18:16.669507  835071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:18:16.782898  835071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:18:16.795661  835071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:18:16.809972  835071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:18:16.810069  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.818995  835071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:18:16.819093  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.828235  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.837217  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.846151  835071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:18:16.854518  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.863549  835071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.873307  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.882358  835071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:18:16.890026  835071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:18:16.897695  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:17.011639  835071 ssh_runner.go:195] Run: sudo systemctl restart crio
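	Taken together, the crictl.yaml write and the sed edits logged above leave CRI-O configured roughly as follows; this is a reconstruction from the commands, not a capture of the files on the node:
	
		# /etc/crictl.yaml
		runtime-endpoint: unix:///var/run/crio/crio.sock
	
		# /etc/crio/crio.conf.d/02-crio.conf (only the keys touched above)
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
	
	The daemon-reload and "systemctl restart crio" just above are what make these edits take effect before the 60s wait for /var/run/crio/crio.sock that follows.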
	I1017 21:18:17.156380  835071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:18:17.156468  835071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:18:17.160428  835071 start.go:563] Will wait 60s for crictl version
	I1017 21:18:17.160499  835071 ssh_runner.go:195] Run: which crictl
	I1017 21:18:17.164126  835071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:18:17.188927  835071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:18:17.189027  835071 ssh_runner.go:195] Run: crio --version
	I1017 21:18:17.218905  835071 ssh_runner.go:195] Run: crio --version
	I1017 21:18:17.252967  835071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:18:17.255914  835071 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:18:17.270038  835071 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:18:17.273988  835071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:18:17.286791  835071 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 21:18:17.289661  835071 kubeadm.go:883] updating cluster {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:18:17.289796  835071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:18:17.289876  835071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:18:17.321989  835071 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:18:17.322014  835071 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:18:17.322073  835071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:18:17.348643  835071 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:18:17.348668  835071 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:18:17.348675  835071 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:18:17.348786  835071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-229231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:18:17.348871  835071 ssh_runner.go:195] Run: crio config
	I1017 21:18:17.421946  835071 cni.go:84] Creating CNI manager for ""
	I1017 21:18:17.421973  835071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:18:17.421990  835071 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 21:18:17.422047  835071 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229231 NodeName:newest-cni-229231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:18:17.422217  835071 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229231"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
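	The three YAML documents above (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration) are what get copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A config in this shape can also be sanity-checked by hand with kubeadm's own validator; the invocation here is illustrative only and is not part of the logged run:
	
		# illustrative: ask kubeadm to validate the generated config against its API types
		sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new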
	
	I1017 21:18:17.422306  835071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:18:17.430035  835071 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:18:17.430151  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:18:17.437591  835071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:18:17.449887  835071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:18:17.461948  835071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1017 21:18:17.474446  835071 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:18:17.478087  835071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
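	Combined with the earlier host.minikube.internal edit at 21:18:17.273988, the container's /etc/hosts should now carry both minikube-internal names; a sketch of the two added entries, reconstructed from the commands rather than read back from the node:
	
		192.168.76.1	host.minikube.internal
		192.168.76.2	control-plane.minikube.internal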
	I1017 21:18:17.490763  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:17.613111  835071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:17.628792  835071 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231 for IP: 192.168.76.2
	I1017 21:18:17.628814  835071 certs.go:195] generating shared ca certs ...
	I1017 21:18:17.628832  835071 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:17.629049  835071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:18:17.629115  835071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:18:17.629129  835071 certs.go:257] generating profile certs ...
	I1017 21:18:17.629235  835071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key
	I1017 21:18:17.629323  835071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c
	I1017 21:18:17.629385  835071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key
	I1017 21:18:17.629534  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:18:17.629588  835071 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:18:17.629600  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:18:17.629627  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:18:17.629671  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:18:17.629702  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:18:17.629766  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:18:17.630393  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:18:17.652656  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:18:17.670641  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:18:17.688391  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:18:17.708401  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:18:17.744932  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:18:17.765572  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:18:17.791919  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:18:17.812063  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:18:17.833593  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:18:17.853713  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:18:17.872941  835071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:18:17.886328  835071 ssh_runner.go:195] Run: openssl version
	I1017 21:18:17.892404  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:18:17.900660  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.904221  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.904321  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.950595  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:18:17.958432  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:18:17.966291  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:17.970087  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:17.970153  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:18.011428  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:18:18.020176  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:18:18.029391  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.033328  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.033447  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.076482  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:18:18.084836  835071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:18:18.088857  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:18:18.129952  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:18:18.171403  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:18:18.212316  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:18:18.256034  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:18:18.306794  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
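	Each openssl invocation above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit is presumably what would flag a certificate for regeneration. An equivalent manual check, shown only as an illustration:
	
		# exit 0: valid for at least another 24h; exit 1: expired or expiring within 24h
		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
		  && echo "valid for at least another 24h" || echo "expires within 24h"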
	I1017 21:18:18.356770  835071 kubeadm.go:400] StartCluster: {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:18.356864  835071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:18:18.356984  835071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:18:18.440978  835071 cri.go:89] found id: "f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594"
	I1017 21:18:18.441001  835071 cri.go:89] found id: "b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5"
	I1017 21:18:18.441005  835071 cri.go:89] found id: "3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab"
	I1017 21:18:18.441009  835071 cri.go:89] found id: "0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82"
	I1017 21:18:18.441012  835071 cri.go:89] found id: ""
	I1017 21:18:18.441092  835071 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:18:18.467663  835071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:18Z" level=error msg="open /run/runc: no such file or directory"
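	The warning above is non-fatal: the run tries to list paused containers through runc, whose default state root when run as root is /run/runc, and that directory does not exist on this node, so the unpause pre-check fails and the flow continues with the configuration-file check on the next line. A quick manual check, illustrative only:
	
		# see whether runc's default state root exists before asking it to list containers
		test -d /run/runc && sudo runc list -f json || echo "/run/runc does not exist; nothing for runc to list"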
	I1017 21:18:18.467784  835071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:18:18.486868  835071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:18:18.486891  835071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:18:18.486980  835071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:18:18.499759  835071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:18:18.500206  835071 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229231" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:18.500342  835071 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229231" cluster setting kubeconfig missing "newest-cni-229231" context setting]
	I1017 21:18:18.500653  835071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.502976  835071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:18:18.513961  835071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 21:18:18.513998  835071 kubeadm.go:601] duration metric: took 27.100646ms to restartPrimaryControlPlane
	I1017 21:18:18.514007  835071 kubeadm.go:402] duration metric: took 157.248919ms to StartCluster
	I1017 21:18:18.514043  835071 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.514126  835071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:18.514758  835071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.515004  835071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:18:18.515326  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:18.515482  835071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:18:18.515591  835071 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229231"
	I1017 21:18:18.515611  835071 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-229231"
	W1017 21:18:18.515617  835071 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:18:18.515617  835071 addons.go:69] Setting dashboard=true in profile "newest-cni-229231"
	I1017 21:18:18.515640  835071 addons.go:238] Setting addon dashboard=true in "newest-cni-229231"
	W1017 21:18:18.515648  835071 addons.go:247] addon dashboard should already be in state true
	I1017 21:18:18.515639  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.515673  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.516104  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.516405  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.516963  835071 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229231"
	I1017 21:18:18.516985  835071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229231"
	I1017 21:18:18.517257  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.520393  835071 out.go:179] * Verifying Kubernetes components...
	I1017 21:18:18.523655  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:18.577440  835071 addons.go:238] Setting addon default-storageclass=true in "newest-cni-229231"
	W1017 21:18:18.577464  835071 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:18:18.577488  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.577926  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.581800  835071 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:18:18.585090  835071 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:18:18.585231  835071 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:18.585242  835071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:18:18.585309  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.594374  835071 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 21:18:18.602053  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:18:18.602082  835071 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:18:18.602154  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.640959  835071 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:18.640983  835071 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:18:18.640981  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.641047  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.663376  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.681589  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.804188  835071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:18.847906  835071 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:18:18.848063  835071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:18:18.864637  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:18.902834  835071 api_server.go:72] duration metric: took 387.792036ms to wait for apiserver process to appear ...
	I1017 21:18:18.902887  835071 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:18:18.902909  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:18.949586  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:18:18.949610  835071 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:18:18.989411  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:19.009738  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:18:19.009775  835071 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:18:19.065705  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:18:19.065732  835071 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:18:19.100959  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:18:19.100984  835071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:18:19.170662  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:18:19.170687  835071 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:18:19.197995  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:18:19.198031  835071 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:18:19.215850  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:18:19.215875  835071 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:18:19.231573  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:18:19.231598  835071 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:18:19.244778  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:18:19.244816  835071 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:18:19.257451  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:18:22.415806  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 21:18:22.415832  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 21:18:22.415846  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:22.544604  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 21:18:22.544629  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 21:18:22.903216  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:22.936299  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:22.936380  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.403618  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:23.420769  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:23.420857  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.903254  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:23.915896  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:23.915922  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.982699  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.117981493s)
	I1017 21:18:23.982756  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.99332416s)
	I1017 21:18:23.983221  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.725735034s)
	I1017 21:18:23.986362  835071 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229231 addons enable metrics-server
	
	I1017 21:18:24.006681  835071 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 21:18:24.009674  835071 addons.go:514] duration metric: took 5.494174772s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 21:18:24.403011  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:24.411065  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:18:24.412146  835071 api_server.go:141] control plane version: v1.34.1
	I1017 21:18:24.412169  835071 api_server.go:131] duration metric: took 5.509273156s to wait for apiserver health ...
	I1017 21:18:24.412178  835071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:18:24.415721  835071 system_pods.go:59] 8 kube-system pods found
	I1017 21:18:24.415757  835071 system_pods.go:61] "coredns-66bc5c9577-zsbw9" [ab5b72a4-6a5d-4f98-9f27-a6b79f1c56cf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:24.415767  835071 system_pods.go:61] "etcd-newest-cni-229231" [1972c4be-a973-41cd-a7db-f940c7bfedcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:18:24.415773  835071 system_pods.go:61] "kindnet-lwztk" [1ce01431-d96e-4be0-aee9-f5172d35f7a0] Running
	I1017 21:18:24.415782  835071 system_pods.go:61] "kube-apiserver-newest-cni-229231" [bc06de01-5287-4d5d-9c16-8917e6f62b6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:18:24.415788  835071 system_pods.go:61] "kube-controller-manager-newest-cni-229231" [62b40139-100e-4c66-827d-de841c45bc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:18:24.415799  835071 system_pods.go:61] "kube-proxy-ws4mh" [66800a1d-51bc-41d0-9811-463a149fc9cd] Running
	I1017 21:18:24.415809  835071 system_pods.go:61] "kube-scheduler-newest-cni-229231" [2b082865-cbcb-428b-b44b-77e744c7e89b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:18:24.415815  835071 system_pods.go:61] "storage-provisioner" [8a3d6e07-be6d-445a-b6af-7ef77edb6905] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:24.415824  835071 system_pods.go:74] duration metric: took 3.640235ms to wait for pod list to return data ...
	I1017 21:18:24.415835  835071 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:18:24.418567  835071 default_sa.go:45] found service account: "default"
	I1017 21:18:24.418591  835071 default_sa.go:55] duration metric: took 2.747431ms for default service account to be created ...
	I1017 21:18:24.418604  835071 kubeadm.go:586] duration metric: took 5.903568379s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:24.418620  835071 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:18:24.421341  835071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:18:24.421376  835071 node_conditions.go:123] node cpu capacity is 2
	I1017 21:18:24.421388  835071 node_conditions.go:105] duration metric: took 2.762234ms to run NodePressure ...
	I1017 21:18:24.421400  835071 start.go:241] waiting for startup goroutines ...
	I1017 21:18:24.421407  835071 start.go:246] waiting for cluster config update ...
	I1017 21:18:24.421418  835071 start.go:255] writing updated cluster config ...
	I1017 21:18:24.421713  835071 ssh_runner.go:195] Run: rm -f paused
	I1017 21:18:24.479242  835071 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:18:24.482580  835071 out.go:179] * Done! kubectl is now configured to use "newest-cni-229231" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.044064988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.047793371Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd503ea1-0062-4b9b-b227-5e768ad97577 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.053742393Z" level=info msg="Ran pod sandbox 1e8cae85f7d560c594bff08ef654496b5c0fdc493fc6667022dbc31835369b4f with infra container: kube-system/kindnet-lwztk/POD" id=cd503ea1-0062-4b9b-b227-5e768ad97577 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.055671032Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0e5aae20-620e-43c0-ad94-265853a8bfb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.059334996Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=83138513-ff82-413c-a5d9-7c72e604ab1d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.063445701Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-ws4mh/POD" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.063510227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.066049631Z" level=info msg="Creating container: kube-system/kindnet-lwztk/kindnet-cni" id=5b8505e7-5a3d-451c-834f-6576ad368da2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.074959141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.075039323Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.084440794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.093675978Z" level=info msg="Ran pod sandbox f9766ad208a98b05b63e91ef199337b10e7f4de6d8e851891a80ffb3fb103c3e with infra container: kube-system/kube-proxy-ws4mh/POD" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.095357911Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e1846cdb-3329-49f3-bfea-4e718ff61bb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.102029535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.10567246Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e05dc629-f8d1-4412-830b-652a80e545a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.106972366Z" level=info msg="Creating container: kube-system/kube-proxy-ws4mh/kube-proxy" id=6fb3af88-3dd4-4aa0-81c6-121df5bfee94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.107390291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.131537325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.133149948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.133817246Z" level=info msg="Created container 19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318: kube-system/kindnet-lwztk/kindnet-cni" id=5b8505e7-5a3d-451c-834f-6576ad368da2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.134450682Z" level=info msg="Starting container: 19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318" id=e7fe7902-cf9a-40ae-9637-e84a50914285 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.138632921Z" level=info msg="Started container" PID=1056 containerID=19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318 description=kube-system/kindnet-lwztk/kindnet-cni id=e7fe7902-cf9a-40ae-9637-e84a50914285 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e8cae85f7d560c594bff08ef654496b5c0fdc493fc6667022dbc31835369b4f
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.203208906Z" level=info msg="Created container 17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a: kube-system/kube-proxy-ws4mh/kube-proxy" id=6fb3af88-3dd4-4aa0-81c6-121df5bfee94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.205898441Z" level=info msg="Starting container: 17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a" id=410d5278-0e14-4941-a4d0-77c4c32f9474 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.20930239Z" level=info msg="Started container" PID=1067 containerID=17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a description=kube-system/kube-proxy-ws4mh/kube-proxy id=410d5278-0e14-4941-a4d0-77c4c32f9474 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9766ad208a98b05b63e91ef199337b10e7f4de6d8e851891a80ffb3fb103c3e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	17e82b9eed82a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   5 seconds ago       Running             kube-proxy                1                   f9766ad208a98       kube-proxy-ws4mh                            kube-system
	19ab6e5263c20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   5 seconds ago       Running             kindnet-cni               1                   1e8cae85f7d56       kindnet-lwztk                               kube-system
	f48d3a5ef287a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   9 seconds ago       Running             kube-controller-manager   1                   4d26f408aa4b8       kube-controller-manager-newest-cni-229231   kube-system
	b71d84c8ecd33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   9 seconds ago       Running             kube-apiserver            1                   751e4e81f0046       kube-apiserver-newest-cni-229231            kube-system
	3ea8f70e077ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   9 seconds ago       Running             kube-scheduler            1                   3203a067c0120       kube-scheduler-newest-cni-229231            kube-system
	0b7d129df455b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   9 seconds ago       Running             etcd                      1                   832bc3b1d0917       etcd-newest-cni-229231                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-229231
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-229231
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-229231
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:17:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-229231
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:18:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-229231
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4099547-187b-4e47-bfa5-074f4f8fb46b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-229231                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         31s
	  kube-system                 kindnet-lwztk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-newest-cni-229231             250m (12%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-229231    200m (10%)    0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-ws4mh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-newest-cni-229231             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 23s                kube-proxy       
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     31s                kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 31s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  31s                kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    31s                kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 31s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           26s                node-controller  Node newest-cni-229231 event: Registered Node newest-cni-229231 in Controller
	  Normal   Starting                 11s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2s                 node-controller  Node newest-cni-229231 event: Registered Node newest-cni-229231 in Controller
	
	
	==> dmesg <==
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	[Oct17 21:17] overlayfs: idmapped layers are currently not supported
	[Oct17 21:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82] <==
	{"level":"warn","ts":"2025-10-17T21:18:21.021844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.052375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.087075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.115534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.191671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.201299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.222470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.244682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.252085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.275297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.297149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.322411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.336596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.351333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.367892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.382973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.401011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.424337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.434273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.456210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.469645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.502323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.525434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.558459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.619163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:28 up  4:00,  0 user,  load average: 4.28, 3.77, 3.30
	Linux newest-cni-229231 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318] <==
	I1017 21:18:23.230104       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:18:23.230371       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:18:23.230484       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:18:23.230496       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:18:23.230506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:18:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:18:23.429715       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:18:23.429750       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:18:23.429759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:18:23.430067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5] <==
	I1017 21:18:22.673729       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:18:22.682097       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:18:22.685648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:18:22.688610       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:18:22.688737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:18:22.688804       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:18:22.689197       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:18:22.689363       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:18:22.689370       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:18:22.700208       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:18:22.719887       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1017 21:18:22.725077       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:18:22.819131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:18:23.391709       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:18:23.630044       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:18:23.714889       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:18:23.776997       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:18:23.795592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:18:23.901580       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.229.156"}
	I1017 21:18:23.954930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.180.252"}
	I1017 21:18:26.022986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:18:26.470542       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:18:26.579475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:18:26.722539       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594] <==
	I1017 21:18:26.023806       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:18:26.025404       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:18:26.026666       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:18:26.030349       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:18:26.030591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 21:18:26.031863       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 21:18:26.031964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 21:18:26.032029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 21:18:26.035849       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:18:26.035950       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:18:26.037974       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:18:26.041357       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:18:26.045436       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:18:26.046823       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 21:18:26.057432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:18:26.057537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:18:26.057575       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:18:26.064533       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:18:26.064742       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 21:18:26.065347       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:18:26.067022       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:18:26.067321       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:18:26.074387       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:18:26.081411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:18:26.584948       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a] <==
	I1017 21:18:23.326445       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:18:23.493496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:18:23.595685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:18:23.600954       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:18:23.601060       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:18:23.726606       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:18:23.726732       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:18:23.731491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:18:23.734717       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:18:23.734820       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:18:23.736395       1 config.go:200] "Starting service config controller"
	I1017 21:18:23.736496       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:18:23.736557       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:18:23.736597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:18:23.736635       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:18:23.736681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:18:23.737350       1 config.go:309] "Starting node config controller"
	I1017 21:18:23.737428       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:18:23.737459       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:18:23.842764       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:18:23.863534       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:18:23.875987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab] <==
	I1017 21:18:20.555223       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:18:22.972099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:18:22.972137       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:18:22.988603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:18:22.988726       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:18:22.988747       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:18:22.988778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:18:22.991080       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:18:22.991094       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:18:22.995202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:22.995244       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:23.088990       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:18:23.096407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:23.099595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:18:19 newest-cni-229231 kubelet[726]: E1017 21:18:19.897896     726 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-229231\" not found" node="newest-cni-229231"
	Oct 17 21:18:20 newest-cni-229231 kubelet[726]: E1017 21:18:20.898774     726 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-229231\" not found" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.441866     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.673210     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-229231\" already exists" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.673253     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.709807     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-229231\" already exists" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.709843     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.732914     726 apiserver.go:52] "Watching apiserver"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.751262     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.753785     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-229231\" already exists" pod="kube-system/kube-apiserver-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.753811     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763820     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763924     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763970     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.767794     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.784225     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-229231\" already exists" pod="kube-system/kube-controller-manager-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784290     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-cni-cfg\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784311     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-lib-modules\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784353     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-xtables-lock\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784370     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-lib-modules\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784416     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-xtables-lock\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.830952     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229231 -n newest-cni-229231
E1017 21:18:28.664728  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229231 -n newest-cni-229231: exit status 2 (355.027697ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-229231 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw: exit status 1 (81.394026ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zsbw9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-d6hvz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9qflw" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-229231
helpers_test.go:243: (dbg) docker inspect newest-cni-229231:

-- stdout --
	[
	    {
	        "Id": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	        "Created": "2025-10-17T21:17:31.336902083Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 835267,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-17T21:18:10.112942858Z",
	            "FinishedAt": "2025-10-17T21:18:08.964830446Z"
	        },
	        "Image": "sha256:551264e61976f283a9fbfb2241e8ff3a6dda7ce0fb240891319c40d01d82fdd7",
	        "ResolvConfPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hostname",
	        "HostsPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/hosts",
	        "LogPath": "/var/lib/docker/containers/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4/b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4-json.log",
	        "Name": "/newest-cni-229231",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-229231:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-229231",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b47c768b4eb724d59a3d6c7ecb06fc8a90f6e21b9e1f30a52688e930103168d4",
	                "LowerDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc-init/diff:/var/lib/docker/overlay2/6da1ffb167d4de5a902f8d65446ed03542477b6c5fe451e82ebc8aeb2535b01f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5509371c4e44589fae4c884e397fc9474f0d72c2345e4b5ebe613a193f67a4cc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-229231",
	                "Source": "/var/lib/docker/volumes/newest-cni-229231/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-229231",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-229231",
	                "name.minikube.sigs.k8s.io": "newest-cni-229231",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9d687b5239424d80c61143f40c4dc6ea8f4218bf3cff6914a5e2ffa115cadccc",
	            "SandboxKey": "/var/run/docker/netns/9d687b523942",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33869"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33870"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33873"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33871"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33872"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-229231": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ca:72:e0:cc:04:fa",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d7c3e6b3b2a4a2268a255e36474804d31f93559fc2897f501a551059144a9568",
	                    "EndpointID": "ad6530b66d76ae1bec7a8989065957b2b9e61f3ef6b3a8d0664193e88939dd0f",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-229231",
	                        "b47c768b4eb7"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231: exit status 2 (336.418202ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25
E1017 21:18:29.946739  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/no-preload-820018/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-229231 logs -n 25: (1.191624121s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-028827                                                                                                                                                                                                               │ disable-driver-mounts-028827 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:15 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-629583 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │                     │
	│ stop    │ -p embed-certs-629583 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:15 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-332023 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-332023 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:16 UTC │
	│ start   │ -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:16 UTC │ 17 Oct 25 21:17 UTC │
	│ image   │ embed-certs-629583 image list --format=json                                                                                                                                                                                                   │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ pause   │ -p embed-certs-629583 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │                     │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ delete  │ -p embed-certs-629583                                                                                                                                                                                                                         │ embed-certs-629583           │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:17 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:17 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ default-k8s-diff-port-332023 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p default-k8s-diff-port-332023 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-229231 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	│ stop    │ -p newest-cni-229231 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-332023                                                                                                                                                                                                               │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-229231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ start   │ -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ delete  │ -p default-k8s-diff-port-332023                                                                                                                                                                                                               │ default-k8s-diff-port-332023 │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ image   │ newest-cni-229231 image list --format=json                                                                                                                                                                                                    │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │ 17 Oct 25 21:18 UTC │
	│ pause   │ -p newest-cni-229231 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-229231            │ jenkins │ v1.37.0 │ 17 Oct 25 21:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 21:18:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 21:18:09.683088  835071 out.go:360] Setting OutFile to fd 1 ...
	I1017 21:18:09.683282  835071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:09.683295  835071 out.go:374] Setting ErrFile to fd 2...
	I1017 21:18:09.683301  835071 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 21:18:09.683561  835071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 21:18:09.683927  835071 out.go:368] Setting JSON to false
	I1017 21:18:09.685750  835071 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":14436,"bootTime":1760721454,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 21:18:09.685855  835071 start.go:141] virtualization:  
	I1017 21:18:09.689802  835071 out.go:179] * [newest-cni-229231] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 21:18:09.692843  835071 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 21:18:09.692995  835071 notify.go:220] Checking for updates...
	I1017 21:18:09.701576  835071 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 21:18:09.704448  835071 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:09.707438  835071 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 21:18:09.710712  835071 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 21:18:09.713844  835071 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 21:18:09.717511  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:09.718289  835071 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 21:18:09.771027  835071 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 21:18:09.771157  835071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:09.894634  835071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-10-17 21:18:09.882351105 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:09.894753  835071 docker.go:318] overlay module found
	I1017 21:18:09.898386  835071 out.go:179] * Using the docker driver based on existing profile
	I1017 21:18:09.901282  835071 start.go:305] selected driver: docker
	I1017 21:18:09.901301  835071 start.go:925] validating driver "docker" against &{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:09.901417  835071 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 21:18:09.902078  835071 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 21:18:10.001282  835071 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:58 SystemTime:2025-10-17 21:18:09.9920263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 21:18:10.001673  835071 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:10.001718  835071 cni.go:84] Creating CNI manager for ""
	I1017 21:18:10.001780  835071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:18:10.001830  835071 start.go:349] cluster config:
	{Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:10.005066  835071 out.go:179] * Starting "newest-cni-229231" primary control-plane node in "newest-cni-229231" cluster
	I1017 21:18:10.008194  835071 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 21:18:10.011300  835071 out.go:179] * Pulling base image v0.0.48-1760609789-21757 ...
	I1017 21:18:10.014307  835071 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 21:18:10.014268  835071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:18:10.014389  835071 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1017 21:18:10.014400  835071 cache.go:58] Caching tarball of preloaded images
	I1017 21:18:10.014491  835071 preload.go:233] Found /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1017 21:18:10.014500  835071 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1017 21:18:10.014656  835071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:18:10.045452  835071 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon, skipping pull
	I1017 21:18:10.045475  835071 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 exists in daemon, skipping load
	I1017 21:18:10.045493  835071 cache.go:232] Successfully downloaded all kic artifacts
	I1017 21:18:10.045517  835071 start.go:360] acquireMachinesLock for newest-cni-229231: {Name:mk13ee1c4f50a5b33a03132c2a1b074ef28a6cf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1017 21:18:10.045577  835071 start.go:364] duration metric: took 40.525µs to acquireMachinesLock for "newest-cni-229231"
	I1017 21:18:10.045598  835071 start.go:96] Skipping create...Using existing machine configuration
	I1017 21:18:10.045604  835071 fix.go:54] fixHost starting: 
	I1017 21:18:10.045887  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:10.068246  835071 fix.go:112] recreateIfNeeded on newest-cni-229231: state=Stopped err=<nil>
	W1017 21:18:10.068276  835071 fix.go:138] unexpected machine state, will restart: <nil>
	I1017 21:18:10.072040  835071 out.go:252] * Restarting existing docker container for "newest-cni-229231" ...
	I1017 21:18:10.072149  835071 cli_runner.go:164] Run: docker start newest-cni-229231
	I1017 21:18:10.353459  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:10.378306  835071 kic.go:430] container "newest-cni-229231" state is running.
	I1017 21:18:10.378757  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:10.398921  835071 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/config.json ...
	I1017 21:18:10.399305  835071 machine.go:93] provisionDockerMachine start ...
	I1017 21:18:10.399372  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:10.429645  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:10.429961  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:10.429970  835071 main.go:141] libmachine: About to run SSH command:
	hostname
	I1017 21:18:10.431225  835071 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56936->127.0.0.1:33869: read: connection reset by peer
	I1017 21:18:13.582761  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:18:13.582792  835071 ubuntu.go:182] provisioning hostname "newest-cni-229231"
	I1017 21:18:13.582872  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:13.600799  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:13.601126  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:13.601142  835071 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-229231 && echo "newest-cni-229231" | sudo tee /etc/hostname
	I1017 21:18:13.760450  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-229231
	
	I1017 21:18:13.760543  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:13.778118  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:13.778428  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:13.778445  835071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-229231' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-229231/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-229231' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1017 21:18:13.927507  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1017 21:18:13.927534  835071 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21664-584308/.minikube CaCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21664-584308/.minikube}
	I1017 21:18:13.927561  835071 ubuntu.go:190] setting up certificates
	I1017 21:18:13.927571  835071 provision.go:84] configureAuth start
	I1017 21:18:13.927635  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:13.946241  835071 provision.go:143] copyHostCerts
	I1017 21:18:13.946311  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem, removing ...
	I1017 21:18:13.946332  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem
	I1017 21:18:13.946411  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/ca.pem (1082 bytes)
	I1017 21:18:13.946517  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem, removing ...
	I1017 21:18:13.946529  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem
	I1017 21:18:13.946554  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/cert.pem (1123 bytes)
	I1017 21:18:13.946622  835071 exec_runner.go:144] found /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem, removing ...
	I1017 21:18:13.946632  835071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem
	I1017 21:18:13.946656  835071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21664-584308/.minikube/key.pem (1675 bytes)
	I1017 21:18:13.946706  835071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem org=jenkins.newest-cni-229231 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-229231]
	I1017 21:18:15.202347  835071 provision.go:177] copyRemoteCerts
	I1017 21:18:15.202415  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1017 21:18:15.202458  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.219649  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.323018  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1017 21:18:15.341933  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1017 21:18:15.359661  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1017 21:18:15.376675  835071 provision.go:87] duration metric: took 1.449075798s to configureAuth
	I1017 21:18:15.376705  835071 ubuntu.go:206] setting minikube options for container-runtime
	I1017 21:18:15.376890  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:15.376999  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.394038  835071 main.go:141] libmachine: Using SSH client type: native
	I1017 21:18:15.394352  835071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33869 <nil> <nil>}
	I1017 21:18:15.394372  835071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1017 21:18:15.683265  835071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1017 21:18:15.683290  835071 machine.go:96] duration metric: took 5.283969937s to provisionDockerMachine
	I1017 21:18:15.683302  835071 start.go:293] postStartSetup for "newest-cni-229231" (driver="docker")
	I1017 21:18:15.683313  835071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1017 21:18:15.683375  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1017 21:18:15.683420  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.700501  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.803351  835071 ssh_runner.go:195] Run: cat /etc/os-release
	I1017 21:18:15.806687  835071 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1017 21:18:15.806714  835071 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1017 21:18:15.806725  835071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/addons for local assets ...
	I1017 21:18:15.806776  835071 filesync.go:126] Scanning /home/jenkins/minikube-integration/21664-584308/.minikube/files for local assets ...
	I1017 21:18:15.806860  835071 filesync.go:149] local asset: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem -> 5861722.pem in /etc/ssl/certs
	I1017 21:18:15.806957  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1017 21:18:15.814275  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:18:15.832196  835071 start.go:296] duration metric: took 148.879057ms for postStartSetup
	I1017 21:18:15.832289  835071 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 21:18:15.832335  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.849401  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:15.947917  835071 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1017 21:18:15.952685  835071 fix.go:56] duration metric: took 5.907073888s for fixHost
	I1017 21:18:15.952706  835071 start.go:83] releasing machines lock for "newest-cni-229231", held for 5.907121101s
	I1017 21:18:15.952772  835071 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-229231
	I1017 21:18:15.970072  835071 ssh_runner.go:195] Run: cat /version.json
	I1017 21:18:15.970141  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.970150  835071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1017 21:18:15.970204  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:15.995301  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:16.000754  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:16.099260  835071 ssh_runner.go:195] Run: systemctl --version
	I1017 21:18:16.194019  835071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1017 21:18:16.230747  835071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1017 21:18:16.235786  835071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1017 21:18:16.235917  835071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1017 21:18:16.244140  835071 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1017 21:18:16.244166  835071 start.go:495] detecting cgroup driver to use...
	I1017 21:18:16.244230  835071 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1017 21:18:16.244293  835071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1017 21:18:16.259633  835071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1017 21:18:16.272748  835071 docker.go:218] disabling cri-docker service (if available) ...
	I1017 21:18:16.272810  835071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1017 21:18:16.288371  835071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1017 21:18:16.301601  835071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1017 21:18:16.409054  835071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1017 21:18:16.526124  835071 docker.go:234] disabling docker service ...
	I1017 21:18:16.526224  835071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1017 21:18:16.542534  835071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1017 21:18:16.555769  835071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1017 21:18:16.669507  835071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1017 21:18:16.782898  835071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1017 21:18:16.795661  835071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1017 21:18:16.809972  835071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1017 21:18:16.810069  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.818995  835071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1017 21:18:16.819093  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.828235  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.837217  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.846151  835071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1017 21:18:16.854518  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.863549  835071 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.873307  835071 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1017 21:18:16.882358  835071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1017 21:18:16.890026  835071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1017 21:18:16.897695  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:17.011639  835071 ssh_runner.go:195] Run: sudo systemctl restart crio
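
Aside (not part of the captured log): the sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place, pinning the pause image to registry.k8s.io/pause:3.10.1, setting cgroup_manager to "cgroupfs", re-adding conmon_cgroup = "pod", and injecting net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, after which CRI-O is restarted so the drop-in takes effect. A minimal standalone sketch of that edit-then-restart sequence, shown here only for illustration (this helper is hypothetical, not minikube's ssh_runner), covering a subset of the substitutions:

package main

import (
	"log"
	"os/exec"
)

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Same style of substitutions the log shows, applied with sed -i.
	edits := []string{
		`s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`,
		`s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`,
		`/conmon_cgroup = .*/d`,
		`/cgroup_manager = .*/a conmon_cgroup = "pod"`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sudo", "sed", "-i", e, conf).CombinedOutput(); err != nil {
			log.Fatalf("sed %q failed: %v\n%s", e, err, out)
		}
	}
	// CRI-O only re-reads its drop-in config on restart.
	if err := exec.Command("sudo", "systemctl", "restart", "crio").Run(); err != nil {
		log.Fatal(err)
	}
}
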
	I1017 21:18:17.156380  835071 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1017 21:18:17.156468  835071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1017 21:18:17.160428  835071 start.go:563] Will wait 60s for crictl version
	I1017 21:18:17.160499  835071 ssh_runner.go:195] Run: which crictl
	I1017 21:18:17.164126  835071 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1017 21:18:17.188927  835071 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1017 21:18:17.189027  835071 ssh_runner.go:195] Run: crio --version
	I1017 21:18:17.218905  835071 ssh_runner.go:195] Run: crio --version
	I1017 21:18:17.252967  835071 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1017 21:18:17.255914  835071 cli_runner.go:164] Run: docker network inspect newest-cni-229231 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1017 21:18:17.270038  835071 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1017 21:18:17.273988  835071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
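
Aside (not part of the captured log): the one-liner above pins host.minikube.internal to 192.168.76.1 by filtering any existing entry out of /etc/hosts, appending the new mapping to a temp file, and copying the result back with sudo (a plain shell redirect would run unprivileged and fail). The same pattern is used for control-plane.minikube.internal at 21:18:17.478087 below. A rough Go sketch of that rewrite, assuming the hostname and IP taken from the log line:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const entry = "192.168.76.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping for the name we are about to re-add.
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	tmp := "/tmp/hosts.minikube" // stand-in for the /tmp/h.$$ temp file in the log
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
	// Copy the rewritten file back into place with elevated privileges.
	if err := exec.Command("sudo", "cp", tmp, "/etc/hosts").Run(); err != nil {
		log.Fatal(err)
	}
}
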
	I1017 21:18:17.286791  835071 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1017 21:18:17.289661  835071 kubeadm.go:883] updating cluster {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1017 21:18:17.289796  835071 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1017 21:18:17.289876  835071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:18:17.321989  835071 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:18:17.322014  835071 crio.go:433] Images already preloaded, skipping extraction
	I1017 21:18:17.322073  835071 ssh_runner.go:195] Run: sudo crictl images --output json
	I1017 21:18:17.348643  835071 crio.go:514] all images are preloaded for cri-o runtime.
	I1017 21:18:17.348668  835071 cache_images.go:85] Images are preloaded, skipping loading
	I1017 21:18:17.348675  835071 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1017 21:18:17.348786  835071 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-229231 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1017 21:18:17.348871  835071 ssh_runner.go:195] Run: crio config
	I1017 21:18:17.421946  835071 cni.go:84] Creating CNI manager for ""
	I1017 21:18:17.421973  835071 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 21:18:17.421990  835071 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1017 21:18:17.422047  835071 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-229231 NodeName:newest-cni-229231 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1017 21:18:17.422217  835071 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-229231"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1017 21:18:17.422306  835071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1017 21:18:17.430035  835071 binaries.go:44] Found k8s binaries, skipping transfer
	I1017 21:18:17.430151  835071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1017 21:18:17.437591  835071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1017 21:18:17.449887  835071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1017 21:18:17.461948  835071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
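
Aside (not part of the captured log): the generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new rather than written over the live file; at 21:18:18.502976 below it is compared against /var/tmp/minikube/kubeadm.yaml (sudo diff -u) to decide whether the existing control plane needs reconfiguring. A hypothetical sketch of that decision, with a plain byte comparison standing in for the diff:

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	current, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		// No previously staged config: treat the node as needing full configuration.
		fmt.Println("no existing kubeadm.yaml, full configuration required")
		return
	}
	proposed, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("read kubeadm.yaml.new:", err)
		return
	}
	if bytes.Equal(current, proposed) {
		// Corresponds to the "running cluster does not require reconfiguration" branch in the log.
		fmt.Println("kubeadm config unchanged, restarting control plane as-is")
	} else {
		fmt.Println("kubeadm config changed, control plane must be reconfigured")
	}
}
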
	I1017 21:18:17.474446  835071 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1017 21:18:17.478087  835071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1017 21:18:17.490763  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:17.613111  835071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:17.628792  835071 certs.go:69] Setting up /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231 for IP: 192.168.76.2
	I1017 21:18:17.628814  835071 certs.go:195] generating shared ca certs ...
	I1017 21:18:17.628832  835071 certs.go:227] acquiring lock for ca certs: {Name:mkb8e0179026fcc6ee893105cdfe6f791df6f3d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:17.629049  835071 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key
	I1017 21:18:17.629115  835071 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key
	I1017 21:18:17.629129  835071 certs.go:257] generating profile certs ...
	I1017 21:18:17.629235  835071 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/client.key
	I1017 21:18:17.629323  835071 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key.c388d62c
	I1017 21:18:17.629385  835071 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key
	I1017 21:18:17.629534  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem (1338 bytes)
	W1017 21:18:17.629588  835071 certs.go:480] ignoring /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172_empty.pem, impossibly tiny 0 bytes
	I1017 21:18:17.629600  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca-key.pem (1675 bytes)
	I1017 21:18:17.629627  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/ca.pem (1082 bytes)
	I1017 21:18:17.629671  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/cert.pem (1123 bytes)
	I1017 21:18:17.629702  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/certs/key.pem (1675 bytes)
	I1017 21:18:17.629766  835071 certs.go:484] found cert: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem (1708 bytes)
	I1017 21:18:17.630393  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1017 21:18:17.652656  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1017 21:18:17.670641  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1017 21:18:17.688391  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1017 21:18:17.708401  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1017 21:18:17.744932  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1017 21:18:17.765572  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1017 21:18:17.791919  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/newest-cni-229231/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1017 21:18:17.812063  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/ssl/certs/5861722.pem --> /usr/share/ca-certificates/5861722.pem (1708 bytes)
	I1017 21:18:17.833593  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1017 21:18:17.853713  835071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21664-584308/.minikube/certs/586172.pem --> /usr/share/ca-certificates/586172.pem (1338 bytes)
	I1017 21:18:17.872941  835071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1017 21:18:17.886328  835071 ssh_runner.go:195] Run: openssl version
	I1017 21:18:17.892404  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5861722.pem && ln -fs /usr/share/ca-certificates/5861722.pem /etc/ssl/certs/5861722.pem"
	I1017 21:18:17.900660  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.904221  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 17 20:04 /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.904321  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5861722.pem
	I1017 21:18:17.950595  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/5861722.pem /etc/ssl/certs/3ec20f2e.0"
	I1017 21:18:17.958432  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1017 21:18:17.966291  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:17.970087  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 17 19:57 /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:17.970153  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1017 21:18:18.011428  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1017 21:18:18.020176  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/586172.pem && ln -fs /usr/share/ca-certificates/586172.pem /etc/ssl/certs/586172.pem"
	I1017 21:18:18.029391  835071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.033328  835071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 17 20:04 /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.033447  835071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/586172.pem
	I1017 21:18:18.076482  835071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/586172.pem /etc/ssl/certs/51391683.0"
	I1017 21:18:18.084836  835071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1017 21:18:18.088857  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1017 21:18:18.129952  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1017 21:18:18.171403  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1017 21:18:18.212316  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1017 21:18:18.256034  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1017 21:18:18.306794  835071 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
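
Aside (not part of the captured log): each `openssl x509 -noout -checkend 86400` probe above exits non-zero if the named certificate will have expired within the next 86400 seconds (24 hours), which tells the caller whether that cert needs replacing before the control plane is restarted. An equivalent check written directly against crypto/x509 (illustrative only; the path is one of the certs probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		return
	}
	// Same window as `-checkend 86400`: will the cert still be valid 24h from now?
	if cert.NotAfter.Before(time.Now().Add(86400 * time.Second)) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past the 24h window")
}
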
	I1017 21:18:18.356770  835071 kubeadm.go:400] StartCluster: {Name:newest-cni-229231 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-229231 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 21:18:18.356864  835071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1017 21:18:18.356984  835071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1017 21:18:18.440978  835071 cri.go:89] found id: "f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594"
	I1017 21:18:18.441001  835071 cri.go:89] found id: "b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5"
	I1017 21:18:18.441005  835071 cri.go:89] found id: "3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab"
	I1017 21:18:18.441009  835071 cri.go:89] found id: "0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82"
	I1017 21:18:18.441012  835071 cri.go:89] found id: ""
	I1017 21:18:18.441092  835071 ssh_runner.go:195] Run: sudo runc list -f json
	W1017 21:18:18.467663  835071 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T21:18:18Z" level=error msg="open /run/runc: no such file or directory"
	I1017 21:18:18.467784  835071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1017 21:18:18.486868  835071 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1017 21:18:18.486891  835071 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1017 21:18:18.486980  835071 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1017 21:18:18.499759  835071 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1017 21:18:18.500206  835071 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-229231" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:18.500342  835071 kubeconfig.go:62] /home/jenkins/minikube-integration/21664-584308/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-229231" cluster setting kubeconfig missing "newest-cni-229231" context setting]
	I1017 21:18:18.500653  835071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.502976  835071 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1017 21:18:18.513961  835071 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1017 21:18:18.513998  835071 kubeadm.go:601] duration metric: took 27.100646ms to restartPrimaryControlPlane
	I1017 21:18:18.514007  835071 kubeadm.go:402] duration metric: took 157.248919ms to StartCluster
	I1017 21:18:18.514043  835071 settings.go:142] acquiring lock: {Name:mkc66fa22d4b34915752317baa792b17686eddf2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.514126  835071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 21:18:18.514758  835071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/kubeconfig: {Name:mk31e33ecc4bc1f0d428a3190b733f92acdfe75d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 21:18:18.515004  835071 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1017 21:18:18.515326  835071 config.go:182] Loaded profile config "newest-cni-229231": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 21:18:18.515482  835071 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1017 21:18:18.515591  835071 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-229231"
	I1017 21:18:18.515611  835071 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-229231"
	W1017 21:18:18.515617  835071 addons.go:247] addon storage-provisioner should already be in state true
	I1017 21:18:18.515617  835071 addons.go:69] Setting dashboard=true in profile "newest-cni-229231"
	I1017 21:18:18.515640  835071 addons.go:238] Setting addon dashboard=true in "newest-cni-229231"
	W1017 21:18:18.515648  835071 addons.go:247] addon dashboard should already be in state true
	I1017 21:18:18.515639  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.515673  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.516104  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.516405  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.516963  835071 addons.go:69] Setting default-storageclass=true in profile "newest-cni-229231"
	I1017 21:18:18.516985  835071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-229231"
	I1017 21:18:18.517257  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.520393  835071 out.go:179] * Verifying Kubernetes components...
	I1017 21:18:18.523655  835071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1017 21:18:18.577440  835071 addons.go:238] Setting addon default-storageclass=true in "newest-cni-229231"
	W1017 21:18:18.577464  835071 addons.go:247] addon default-storageclass should already be in state true
	I1017 21:18:18.577488  835071 host.go:66] Checking if "newest-cni-229231" exists ...
	I1017 21:18:18.577926  835071 cli_runner.go:164] Run: docker container inspect newest-cni-229231 --format={{.State.Status}}
	I1017 21:18:18.581800  835071 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1017 21:18:18.585090  835071 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1017 21:18:18.585231  835071 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:18.585242  835071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1017 21:18:18.585309  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.594374  835071 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1017 21:18:18.602053  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1017 21:18:18.602082  835071 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1017 21:18:18.602154  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.640959  835071 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:18.640983  835071 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1017 21:18:18.640981  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.641047  835071 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-229231
	I1017 21:18:18.663376  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.681589  835071 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33869 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/newest-cni-229231/id_rsa Username:docker}
	I1017 21:18:18.804188  835071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1017 21:18:18.847906  835071 api_server.go:52] waiting for apiserver process to appear ...
	I1017 21:18:18.848063  835071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 21:18:18.864637  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1017 21:18:18.902834  835071 api_server.go:72] duration metric: took 387.792036ms to wait for apiserver process to appear ...
	I1017 21:18:18.902887  835071 api_server.go:88] waiting for apiserver healthz status ...
	I1017 21:18:18.902909  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:18.949586  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1017 21:18:18.949610  835071 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1017 21:18:18.989411  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1017 21:18:19.009738  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1017 21:18:19.009775  835071 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1017 21:18:19.065705  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1017 21:18:19.065732  835071 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1017 21:18:19.100959  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1017 21:18:19.100984  835071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1017 21:18:19.170662  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1017 21:18:19.170687  835071 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1017 21:18:19.197995  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1017 21:18:19.198031  835071 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1017 21:18:19.215850  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1017 21:18:19.215875  835071 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1017 21:18:19.231573  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1017 21:18:19.231598  835071 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1017 21:18:19.244778  835071 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:18:19.244816  835071 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1017 21:18:19.257451  835071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1017 21:18:22.415806  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 21:18:22.415832  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 21:18:22.415846  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:22.544604  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1017 21:18:22.544629  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1017 21:18:22.903216  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:22.936299  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:22.936380  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.403618  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:23.420769  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:23.420857  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.903254  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:23.915896  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1017 21:18:23.915922  835071 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1017 21:18:23.982699  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.117981493s)
	I1017 21:18:23.982756  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.99332416s)
	I1017 21:18:23.983221  835071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.725735034s)
	I1017 21:18:23.986362  835071 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-229231 addons enable metrics-server
	
	I1017 21:18:24.006681  835071 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1017 21:18:24.009674  835071 addons.go:514] duration metric: took 5.494174772s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1017 21:18:24.403011  835071 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1017 21:18:24.411065  835071 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1017 21:18:24.412146  835071 api_server.go:141] control plane version: v1.34.1
	I1017 21:18:24.412169  835071 api_server.go:131] duration metric: took 5.509273156s to wait for apiserver health ...
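
Aside (not part of the captured log): the healthz probes above progress from 403 (the anonymous probe is rejected until the RBAC bootstrap roles exist) to 500 (the rbac/bootstrap-roles post-start hook is still failing) to 200 once the post-start hooks finish, with attempts roughly half a second apart. A minimal sketch of such a poll loop, assuming the endpoint and an arbitrary overall timeout; TLS verification is skipped here purely for illustration:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Accept the apiserver's self-signed serving cert for this sketch only.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			fmt.Println("healthz returned", code)
			if code == http.StatusOK {
				return // apiserver is healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
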
	I1017 21:18:24.412178  835071 system_pods.go:43] waiting for kube-system pods to appear ...
	I1017 21:18:24.415721  835071 system_pods.go:59] 8 kube-system pods found
	I1017 21:18:24.415757  835071 system_pods.go:61] "coredns-66bc5c9577-zsbw9" [ab5b72a4-6a5d-4f98-9f27-a6b79f1c56cf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:24.415767  835071 system_pods.go:61] "etcd-newest-cni-229231" [1972c4be-a973-41cd-a7db-f940c7bfedcc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1017 21:18:24.415773  835071 system_pods.go:61] "kindnet-lwztk" [1ce01431-d96e-4be0-aee9-f5172d35f7a0] Running
	I1017 21:18:24.415782  835071 system_pods.go:61] "kube-apiserver-newest-cni-229231" [bc06de01-5287-4d5d-9c16-8917e6f62b6c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1017 21:18:24.415788  835071 system_pods.go:61] "kube-controller-manager-newest-cni-229231" [62b40139-100e-4c66-827d-de841c45bc92] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1017 21:18:24.415799  835071 system_pods.go:61] "kube-proxy-ws4mh" [66800a1d-51bc-41d0-9811-463a149fc9cd] Running
	I1017 21:18:24.415809  835071 system_pods.go:61] "kube-scheduler-newest-cni-229231" [2b082865-cbcb-428b-b44b-77e744c7e89b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1017 21:18:24.415815  835071 system_pods.go:61] "storage-provisioner" [8a3d6e07-be6d-445a-b6af-7ef77edb6905] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1017 21:18:24.415824  835071 system_pods.go:74] duration metric: took 3.640235ms to wait for pod list to return data ...
	I1017 21:18:24.415835  835071 default_sa.go:34] waiting for default service account to be created ...
	I1017 21:18:24.418567  835071 default_sa.go:45] found service account: "default"
	I1017 21:18:24.418591  835071 default_sa.go:55] duration metric: took 2.747431ms for default service account to be created ...
	I1017 21:18:24.418604  835071 kubeadm.go:586] duration metric: took 5.903568379s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1017 21:18:24.418620  835071 node_conditions.go:102] verifying NodePressure condition ...
	I1017 21:18:24.421341  835071 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1017 21:18:24.421376  835071 node_conditions.go:123] node cpu capacity is 2
	I1017 21:18:24.421388  835071 node_conditions.go:105] duration metric: took 2.762234ms to run NodePressure ...
	I1017 21:18:24.421400  835071 start.go:241] waiting for startup goroutines ...
	I1017 21:18:24.421407  835071 start.go:246] waiting for cluster config update ...
	I1017 21:18:24.421418  835071 start.go:255] writing updated cluster config ...
	I1017 21:18:24.421713  835071 ssh_runner.go:195] Run: rm -f paused
	I1017 21:18:24.479242  835071 start.go:624] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1017 21:18:24.482580  835071 out.go:179] * Done! kubectl is now configured to use "newest-cni-229231" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.044064988Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.047793371Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cd503ea1-0062-4b9b-b227-5e768ad97577 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.053742393Z" level=info msg="Ran pod sandbox 1e8cae85f7d560c594bff08ef654496b5c0fdc493fc6667022dbc31835369b4f with infra container: kube-system/kindnet-lwztk/POD" id=cd503ea1-0062-4b9b-b227-5e768ad97577 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.055671032Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0e5aae20-620e-43c0-ad94-265853a8bfb6 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.059334996Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=83138513-ff82-413c-a5d9-7c72e604ab1d name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.063445701Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-ws4mh/POD" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.063510227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.066049631Z" level=info msg="Creating container: kube-system/kindnet-lwztk/kindnet-cni" id=5b8505e7-5a3d-451c-834f-6576ad368da2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.074959141Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.075039323Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.084440794Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.093675978Z" level=info msg="Ran pod sandbox f9766ad208a98b05b63e91ef199337b10e7f4de6d8e851891a80ffb3fb103c3e with infra container: kube-system/kube-proxy-ws4mh/POD" id=df3c88e1-10f7-48b7-808b-036b1aef6655 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.095357911Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e1846cdb-3329-49f3-bfea-4e718ff61bb0 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.102029535Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.10567246Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e05dc629-f8d1-4412-830b-652a80e545a8 name=/runtime.v1.ImageService/ImageStatus
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.106972366Z" level=info msg="Creating container: kube-system/kube-proxy-ws4mh/kube-proxy" id=6fb3af88-3dd4-4aa0-81c6-121df5bfee94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.107390291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.131537325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.133149948Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.133817246Z" level=info msg="Created container 19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318: kube-system/kindnet-lwztk/kindnet-cni" id=5b8505e7-5a3d-451c-834f-6576ad368da2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.134450682Z" level=info msg="Starting container: 19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318" id=e7fe7902-cf9a-40ae-9637-e84a50914285 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.138632921Z" level=info msg="Started container" PID=1056 containerID=19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318 description=kube-system/kindnet-lwztk/kindnet-cni id=e7fe7902-cf9a-40ae-9637-e84a50914285 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1e8cae85f7d560c594bff08ef654496b5c0fdc493fc6667022dbc31835369b4f
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.203208906Z" level=info msg="Created container 17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a: kube-system/kube-proxy-ws4mh/kube-proxy" id=6fb3af88-3dd4-4aa0-81c6-121df5bfee94 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.205898441Z" level=info msg="Starting container: 17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a" id=410d5278-0e14-4941-a4d0-77c4c32f9474 name=/runtime.v1.RuntimeService/StartContainer
	Oct 17 21:18:23 newest-cni-229231 crio[610]: time="2025-10-17T21:18:23.20930239Z" level=info msg="Started container" PID=1067 containerID=17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a description=kube-system/kube-proxy-ws4mh/kube-proxy id=410d5278-0e14-4941-a4d0-77c4c32f9474 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9766ad208a98b05b63e91ef199337b10e7f4de6d8e851891a80ffb3fb103c3e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	17e82b9eed82a       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   7 seconds ago       Running             kube-proxy                1                   f9766ad208a98       kube-proxy-ws4mh                            kube-system
	19ab6e5263c20       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   7 seconds ago       Running             kindnet-cni               1                   1e8cae85f7d56       kindnet-lwztk                               kube-system
	f48d3a5ef287a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   11 seconds ago      Running             kube-controller-manager   1                   4d26f408aa4b8       kube-controller-manager-newest-cni-229231   kube-system
	b71d84c8ecd33       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   11 seconds ago      Running             kube-apiserver            1                   751e4e81f0046       kube-apiserver-newest-cni-229231            kube-system
	3ea8f70e077ed       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   11 seconds ago      Running             kube-scheduler            1                   3203a067c0120       kube-scheduler-newest-cni-229231            kube-system
	0b7d129df455b       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   11 seconds ago      Running             etcd                      1                   832bc3b1d0917       etcd-newest-cni-229231                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-229231
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-229231
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=1ce431d127508e786aafb40a181eff57a5af17f0
	                    minikube.k8s.io/name=newest-cni-229231
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_17T21_17_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 17 Oct 2025 21:17:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-229231
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 17 Oct 2025 21:18:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Fri, 17 Oct 2025 21:18:22 +0000   Fri, 17 Oct 2025 21:17:51 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-229231
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 c52191f5187031740f634bad68f0c727
	  System UUID:                a4099547-187b-4e47-bfa5-074f4f8fb46b
	  Boot ID:                    571a5863-a2dd-484d-8a17-263cc3da9adf
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-229231                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-lwztk                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-229231             250m (12%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-newest-cni-229231    200m (10%)    0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-proxy-ws4mh                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-229231             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Normal   Starting                 6s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 40s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 40s                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     33s                kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           28s                node-controller  Node newest-cni-229231 event: Registered Node newest-cni-229231 in Controller
	  Normal   Starting                 13s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-229231 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-229231 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-229231 event: Registered Node newest-cni-229231 in Controller
	
	
	==> dmesg <==
	[Oct17 20:55] overlayfs: idmapped layers are currently not supported
	[  +0.065655] overlayfs: idmapped layers are currently not supported
	[Oct17 20:57] overlayfs: idmapped layers are currently not supported
	[ +47.825184] overlayfs: idmapped layers are currently not supported
	[  +1.758806] overlayfs: idmapped layers are currently not supported
	[Oct17 20:58] overlayfs: idmapped layers are currently not supported
	[Oct17 20:59] overlayfs: idmapped layers are currently not supported
	[ +24.759001] overlayfs: idmapped layers are currently not supported
	[Oct17 21:05] overlayfs: idmapped layers are currently not supported
	[ +47.228569] overlayfs: idmapped layers are currently not supported
	[Oct17 21:07] overlayfs: idmapped layers are currently not supported
	[Oct17 21:08] overlayfs: idmapped layers are currently not supported
	[ +44.011146] overlayfs: idmapped layers are currently not supported
	[Oct17 21:09] overlayfs: idmapped layers are currently not supported
	[Oct17 21:10] overlayfs: idmapped layers are currently not supported
	[Oct17 21:11] overlayfs: idmapped layers are currently not supported
	[Oct17 21:12] overlayfs: idmapped layers are currently not supported
	[ +33.710626] overlayfs: idmapped layers are currently not supported
	[Oct17 21:13] overlayfs: idmapped layers are currently not supported
	[Oct17 21:14] overlayfs: idmapped layers are currently not supported
	[Oct17 21:15] overlayfs: idmapped layers are currently not supported
	[Oct17 21:16] overlayfs: idmapped layers are currently not supported
	[ +39.491005] overlayfs: idmapped layers are currently not supported
	[Oct17 21:17] overlayfs: idmapped layers are currently not supported
	[Oct17 21:18] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [0b7d129df455b6a3e0f34cec955c1fcaa67f5320e3830955998575d292889e82] <==
	{"level":"warn","ts":"2025-10-17T21:18:21.021844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.052375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.087075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.115534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.191671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.201299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.222470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.244682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.252085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.275297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.297149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.322411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.336596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.351333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.367892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.382973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.401011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.424337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.434273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.456210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.469645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.502323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.525434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.558459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-17T21:18:21.619163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 21:18:30 up  4:00,  0 user,  load average: 4.28, 3.77, 3.30
	Linux newest-cni-229231 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [19ab6e5263c2016d5063153123991c9f1193ee021f8e6f349963d075ed5b7318] <==
	I1017 21:18:23.230104       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1017 21:18:23.230371       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1017 21:18:23.230484       1 main.go:148] setting mtu 1500 for CNI 
	I1017 21:18:23.230496       1 main.go:178] kindnetd IP family: "ipv4"
	I1017 21:18:23.230506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-17T21:18:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1017 21:18:23.429715       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1017 21:18:23.429750       1 controller.go:381] "Waiting for informer caches to sync"
	I1017 21:18:23.429759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1017 21:18:23.430067       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [b71d84c8ecd33d1833396a7fd42abd75401da17f2fc4116acb8d4b0a51ae20c5] <==
	I1017 21:18:22.673729       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1017 21:18:22.682097       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1017 21:18:22.685648       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1017 21:18:22.688610       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1017 21:18:22.688737       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1017 21:18:22.688804       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1017 21:18:22.689197       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1017 21:18:22.689363       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1017 21:18:22.689370       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1017 21:18:22.700208       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1017 21:18:22.719887       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1017 21:18:22.725077       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1017 21:18:22.819131       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1017 21:18:23.391709       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1017 21:18:23.630044       1 controller.go:667] quota admission added evaluator for: namespaces
	I1017 21:18:23.714889       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1017 21:18:23.776997       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1017 21:18:23.795592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1017 21:18:23.901580       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.229.156"}
	I1017 21:18:23.954930       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.180.252"}
	I1017 21:18:26.022986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1017 21:18:26.470542       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:18:26.470542       1 controller.go:667] quota admission added evaluator for: endpoints
	I1017 21:18:26.579475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1017 21:18:26.722539       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [f48d3a5ef287af2305748dffdf52d8cb533ac11f6b89f6965c9a7d95699a8594] <==
	I1017 21:18:26.023806       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1017 21:18:26.025404       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1017 21:18:26.026666       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1017 21:18:26.030349       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1017 21:18:26.030591       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1017 21:18:26.031863       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1017 21:18:26.031964       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1017 21:18:26.032029       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1017 21:18:26.035849       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1017 21:18:26.035950       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1017 21:18:26.037974       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1017 21:18:26.041357       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1017 21:18:26.045436       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1017 21:18:26.046823       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1017 21:18:26.057432       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:18:26.057537       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1017 21:18:26.057575       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1017 21:18:26.064533       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1017 21:18:26.064742       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1017 21:18:26.065347       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1017 21:18:26.067022       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1017 21:18:26.067321       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1017 21:18:26.074387       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1017 21:18:26.081411       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1017 21:18:26.584948       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [17e82b9eed82a9ce71295926efc5a3b6ab739085493b6542ecbf8ece6e35e95a] <==
	I1017 21:18:23.326445       1 server_linux.go:53] "Using iptables proxy"
	I1017 21:18:23.493496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1017 21:18:23.595685       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1017 21:18:23.600954       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1017 21:18:23.601060       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1017 21:18:23.726606       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1017 21:18:23.726732       1 server_linux.go:132] "Using iptables Proxier"
	I1017 21:18:23.731491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1017 21:18:23.734717       1 server.go:527] "Version info" version="v1.34.1"
	I1017 21:18:23.734820       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:18:23.736395       1 config.go:200] "Starting service config controller"
	I1017 21:18:23.736496       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1017 21:18:23.736557       1 config.go:106] "Starting endpoint slice config controller"
	I1017 21:18:23.736597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1017 21:18:23.736635       1 config.go:403] "Starting serviceCIDR config controller"
	I1017 21:18:23.736681       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1017 21:18:23.737350       1 config.go:309] "Starting node config controller"
	I1017 21:18:23.737428       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1017 21:18:23.737459       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1017 21:18:23.842764       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1017 21:18:23.863534       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1017 21:18:23.875987       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ea8f70e077edbd8639efc1caa73e03fc7f8a14927b6ccc097b5c5c4fa2e46ab] <==
	I1017 21:18:20.555223       1 serving.go:386] Generated self-signed cert in-memory
	I1017 21:18:22.972099       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1017 21:18:22.972137       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1017 21:18:22.988603       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1017 21:18:22.988726       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1017 21:18:22.988747       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1017 21:18:22.988778       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1017 21:18:22.991080       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:18:22.991094       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1017 21:18:22.995202       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:22.995244       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:23.088990       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1017 21:18:23.096407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1017 21:18:23.099595       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 17 21:18:19 newest-cni-229231 kubelet[726]: E1017 21:18:19.897896     726 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-229231\" not found" node="newest-cni-229231"
	Oct 17 21:18:20 newest-cni-229231 kubelet[726]: E1017 21:18:20.898774     726 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-229231\" not found" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.441866     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.673210     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-229231\" already exists" pod="kube-system/kube-scheduler-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.673253     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.709807     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-229231\" already exists" pod="kube-system/etcd-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.709843     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.732914     726 apiserver.go:52] "Watching apiserver"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.751262     726 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.753785     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-229231\" already exists" pod="kube-system/kube-apiserver-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.753811     726 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763820     726 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763924     726 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.763970     726 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.767794     726 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: E1017 21:18:22.784225     726 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-229231\" already exists" pod="kube-system/kube-controller-manager-newest-cni-229231"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784290     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-cni-cfg\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784311     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-lib-modules\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784353     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-xtables-lock\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784370     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/66800a1d-51bc-41d0-9811-463a149fc9cd-lib-modules\") pod \"kube-proxy-ws4mh\" (UID: \"66800a1d-51bc-41d0-9811-463a149fc9cd\") " pod="kube-system/kube-proxy-ws4mh"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.784416     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1ce01431-d96e-4be0-aee9-f5172d35f7a0-xtables-lock\") pod \"kindnet-lwztk\" (UID: \"1ce01431-d96e-4be0-aee9-f5172d35f7a0\") " pod="kube-system/kindnet-lwztk"
	Oct 17 21:18:22 newest-cni-229231 kubelet[726]: I1017 21:18:22.830952     726 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 17 21:18:25 newest-cni-229231 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229231 -n newest-cni-229231
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-229231 -n newest-cni-229231: exit status 2 (344.733252ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-229231 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw: exit status 1 (82.661825ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-zsbw9" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-d6hvz" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-9qflw" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-229231 describe pod coredns-66bc5c9577-zsbw9 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-d6hvz kubernetes-dashboard-855c9754f9-9qflw: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.08s)


Test pass (257/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.52
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.1
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.1/json-events 4.92
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.39
18 TestDownloadOnly/v1.34.1/DeleteAll 0.38
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.24
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 175.47
31 TestAddons/serial/GCPAuth/Namespaces 0.25
32 TestAddons/serial/GCPAuth/FakeCredentials 9.77
48 TestAddons/StoppedEnableDisable 12.45
49 TestCertOptions 34.21
50 TestCertExpiration 237.63
52 TestForceSystemdFlag 51.09
53 TestForceSystemdEnv 40.12
59 TestErrorSpam/setup 33.92
60 TestErrorSpam/start 0.83
61 TestErrorSpam/status 1.2
62 TestErrorSpam/pause 6.26
63 TestErrorSpam/unpause 6.03
64 TestErrorSpam/stop 1.53
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 80.97
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 30.87
71 TestFunctional/serial/KubeContext 0.07
72 TestFunctional/serial/KubectlGetPods 0.11
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.42
76 TestFunctional/serial/CacheCmd/cache/add_local 1.09
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.07
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.11
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
84 TestFunctional/serial/ExtraConfig 37.7
85 TestFunctional/serial/ComponentHealth 0.11
86 TestFunctional/serial/LogsCmd 1.45
87 TestFunctional/serial/LogsFileCmd 1.48
88 TestFunctional/serial/InvalidService 4.07
90 TestFunctional/parallel/ConfigCmd 0.49
91 TestFunctional/parallel/DashboardCmd 11.08
92 TestFunctional/parallel/DryRun 0.46
93 TestFunctional/parallel/InternationalLanguage 0.23
94 TestFunctional/parallel/StatusCmd 1.12
99 TestFunctional/parallel/AddonsCmd 0.19
100 TestFunctional/parallel/PersistentVolumeClaim 26.95
102 TestFunctional/parallel/SSHCmd 0.72
103 TestFunctional/parallel/CpCmd 2.4
105 TestFunctional/parallel/FileSync 0.29
106 TestFunctional/parallel/CertSync 2.13
110 TestFunctional/parallel/NodeLabels 0.09
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.57
114 TestFunctional/parallel/License 0.37
116 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.68
117 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.47
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
121 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
125 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
128 TestFunctional/parallel/ProfileCmd/profile_list 0.42
129 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
130 TestFunctional/parallel/MountCmd/any-port 7.21
131 TestFunctional/parallel/MountCmd/specific-port 1.67
132 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
133 TestFunctional/parallel/ServiceCmd/List 1.36
134 TestFunctional/parallel/ServiceCmd/JSONOutput 1.44
138 TestFunctional/parallel/Version/short 0.08
139 TestFunctional/parallel/Version/components 1.28
140 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
141 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
142 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
143 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
144 TestFunctional/parallel/ImageCommands/ImageBuild 3.77
145 TestFunctional/parallel/ImageCommands/Setup 0.64
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
153 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 218.84
164 TestMultiControlPlane/serial/DeployApp 7.74
165 TestMultiControlPlane/serial/PingHostFromPods 1.45
166 TestMultiControlPlane/serial/AddWorkerNode 59.22
167 TestMultiControlPlane/serial/NodeLabels 0.1
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
169 TestMultiControlPlane/serial/CopyFile 20.08
170 TestMultiControlPlane/serial/StopSecondaryNode 12.9
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
172 TestMultiControlPlane/serial/RestartSecondaryNode 32.18
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.4
177 TestMultiControlPlane/serial/StopCluster 24.38
178 TestMultiControlPlane/serial/RestartCluster 89.22
179 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
180 TestMultiControlPlane/serial/AddSecondaryNode 84.56
181 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.07
185 TestJSONOutput/start/Command 79.23
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.84
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 64.95
211 TestKicCustomNetwork/use_default_bridge_network 38.08
212 TestKicExistingNetwork 36.8
213 TestKicCustomSubnet 40.09
214 TestKicStaticIP 36.67
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 80.34
219 TestMountStart/serial/StartWithMountFirst 9.56
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 7.83
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 8.79
227 TestMountStart/serial/VerifyMountPostStop 0.42
230 TestMultiNode/serial/FreshStart2Nodes 137.8
231 TestMultiNode/serial/DeployApp2Nodes 5.28
232 TestMultiNode/serial/PingHostFrom2Pods 0.9
233 TestMultiNode/serial/AddNode 57.95
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.73
236 TestMultiNode/serial/CopyFile 10.4
237 TestMultiNode/serial/StopNode 2.39
238 TestMultiNode/serial/StartAfterStop 8.67
239 TestMultiNode/serial/RestartKeepsNodes 79.34
240 TestMultiNode/serial/DeleteNode 5.65
241 TestMultiNode/serial/StopMultiNode 24.08
242 TestMultiNode/serial/RestartMultiNode 51.3
243 TestMultiNode/serial/ValidateNameConflict 37.27
248 TestPreload 129.96
250 TestScheduledStopUnix 105.05
253 TestInsufficientStorage 13.97
254 TestRunningBinaryUpgrade 60.53
256 TestKubernetesUpgrade 352.47
257 TestMissingContainerUpgrade 112.01
259 TestPause/serial/Start 92.93
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
262 TestNoKubernetes/serial/StartWithK8s 44.28
263 TestNoKubernetes/serial/StartWithStopK8s 7.73
264 TestNoKubernetes/serial/Start 8.96
265 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
266 TestNoKubernetes/serial/ProfileList 1.09
267 TestNoKubernetes/serial/Stop 1.31
268 TestNoKubernetes/serial/StartNoArgs 7.69
269 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
277 TestNetworkPlugins/group/false 3.73
281 TestPause/serial/SecondStartNoReconfiguration 30.5
283 TestStoppedBinaryUpgrade/Setup 1.24
284 TestStoppedBinaryUpgrade/Upgrade 62.97
292 TestNetworkPlugins/group/auto/Start 85.07
293 TestStoppedBinaryUpgrade/MinikubeLogs 1.21
294 TestNetworkPlugins/group/kindnet/Start 82.04
295 TestNetworkPlugins/group/auto/KubeletFlags 0.35
296 TestNetworkPlugins/group/auto/NetCatPod 12.36
297 TestNetworkPlugins/group/auto/DNS 0.17
298 TestNetworkPlugins/group/auto/Localhost 0.13
299 TestNetworkPlugins/group/auto/HairPin 0.13
300 TestNetworkPlugins/group/calico/Start 63
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
303 TestNetworkPlugins/group/kindnet/NetCatPod 12.32
304 TestNetworkPlugins/group/kindnet/DNS 0.2
305 TestNetworkPlugins/group/kindnet/Localhost 0.16
306 TestNetworkPlugins/group/kindnet/HairPin 0.17
307 TestNetworkPlugins/group/custom-flannel/Start 73.68
308 TestNetworkPlugins/group/calico/ControllerPod 6
309 TestNetworkPlugins/group/calico/KubeletFlags 0.38
310 TestNetworkPlugins/group/calico/NetCatPod 11.31
311 TestNetworkPlugins/group/calico/DNS 0.36
312 TestNetworkPlugins/group/calico/Localhost 0.18
313 TestNetworkPlugins/group/calico/HairPin 0.2
314 TestNetworkPlugins/group/enable-default-cni/Start 80.83
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.38
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
317 TestNetworkPlugins/group/custom-flannel/DNS 0.18
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
320 TestNetworkPlugins/group/flannel/Start 65.98
321 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.41
322 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
323 TestNetworkPlugins/group/enable-default-cni/DNS 0.57
324 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
325 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
326 TestNetworkPlugins/group/bridge/Start 76.84
327 TestNetworkPlugins/group/flannel/ControllerPod 6.01
328 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
329 TestNetworkPlugins/group/flannel/NetCatPod 10.3
330 TestNetworkPlugins/group/flannel/DNS 0.19
331 TestNetworkPlugins/group/flannel/Localhost 0.17
332 TestNetworkPlugins/group/flannel/HairPin 0.15
334 TestStartStop/group/old-k8s-version/serial/FirstStart 70.94
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
336 TestNetworkPlugins/group/bridge/NetCatPod 13.41
337 TestNetworkPlugins/group/bridge/DNS 0.17
338 TestNetworkPlugins/group/bridge/Localhost 0.18
339 TestNetworkPlugins/group/bridge/HairPin 0.18
341 TestStartStop/group/no-preload/serial/FirstStart 70.23
342 TestStartStop/group/old-k8s-version/serial/DeployApp 10.44
344 TestStartStop/group/old-k8s-version/serial/Stop 13.44
345 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
346 TestStartStop/group/old-k8s-version/serial/SecondStart 58.56
347 TestStartStop/group/no-preload/serial/DeployApp 9.39
349 TestStartStop/group/no-preload/serial/Stop 12.08
350 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
351 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
352 TestStartStop/group/no-preload/serial/SecondStart 51.63
353 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.16
354 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.4
357 TestStartStop/group/embed-certs/serial/FirstStart 94.19
358 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
359 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
360 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.27
363 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.27
364 TestStartStop/group/embed-certs/serial/DeployApp 9.31
366 TestStartStop/group/embed-certs/serial/Stop 12.3
367 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
368 TestStartStop/group/embed-certs/serial/SecondStart 51.75
369 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.45
371 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.34
372 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
373 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 57.72
374 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
375 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.14
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.37
379 TestStartStop/group/newest-cni/serial/FirstStart 38.94
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.11
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.28
384 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/Stop 1.53
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
388 TestStartStop/group/newest-cni/serial/SecondStart 15.3
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
x
+
TestDownloadOnly/v1.28.0/json-events (5.52s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-011118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-011118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.523147598s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.52s)

x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1017 19:57:17.718568  586172 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1017 19:57:17.718648  586172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.1s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-011118
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-011118: exit status 85 (100.376121ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-011118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-011118 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:57:12
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:57:12.240763  586177 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:57:12.240935  586177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:12.240947  586177 out.go:374] Setting ErrFile to fd 2...
	I1017 19:57:12.240953  586177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:12.241212  586177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	W1017 19:57:12.241357  586177 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21664-584308/.minikube/config/config.json: open /home/jenkins/minikube-integration/21664-584308/.minikube/config/config.json: no such file or directory
	I1017 19:57:12.241785  586177 out.go:368] Setting JSON to true
	I1017 19:57:12.242652  586177 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9578,"bootTime":1760721454,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 19:57:12.242730  586177 start.go:141] virtualization:  
	I1017 19:57:12.246955  586177 out.go:99] [download-only-011118] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1017 19:57:12.247184  586177 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball: no such file or directory
	I1017 19:57:12.247319  586177 notify.go:220] Checking for updates...
	I1017 19:57:12.251137  586177 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:57:12.254252  586177 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:57:12.257193  586177 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:57:12.260170  586177 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 19:57:12.263203  586177 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1017 19:57:12.268944  586177 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:57:12.269289  586177 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:57:12.300798  586177 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:57:12.300913  586177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:12.359447  586177 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 19:57:12.349470377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:12.359558  586177 docker.go:318] overlay module found
	I1017 19:57:12.362708  586177 out.go:99] Using the docker driver based on user configuration
	I1017 19:57:12.362764  586177 start.go:305] selected driver: docker
	I1017 19:57:12.362781  586177 start.go:925] validating driver "docker" against <nil>
	I1017 19:57:12.362908  586177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:12.424097  586177 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:63 SystemTime:2025-10-17 19:57:12.415216854 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:12.424263  586177 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:57:12.424604  586177 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1017 19:57:12.424759  586177 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:57:12.427867  586177 out.go:171] Using Docker driver with root privileges
	I1017 19:57:12.430883  586177 cni.go:84] Creating CNI manager for ""
	I1017 19:57:12.430983  586177 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1017 19:57:12.430997  586177 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1017 19:57:12.431089  586177 start.go:349] cluster config:
	{Name:download-only-011118 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-011118 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 19:57:12.434207  586177 out.go:99] Starting "download-only-011118" primary control-plane node in "download-only-011118" cluster
	I1017 19:57:12.434255  586177 cache.go:123] Beginning downloading kic base image for docker with crio
	I1017 19:57:12.437353  586177 out.go:99] Pulling base image v0.0.48-1760609789-21757 ...
	I1017 19:57:12.437490  586177 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:57:12.437559  586177 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local docker daemon
	I1017 19:57:12.457829  586177 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:57:12.458036  586177 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 in local cache directory
	I1017 19:57:12.458149  586177 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 to local cache
	I1017 19:57:12.487258  586177 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 19:57:12.487289  586177 cache.go:58] Caching tarball of preloaded images
	I1017 19:57:12.487477  586177 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:57:12.490880  586177 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1017 19:57:12.490915  586177 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1017 19:57:12.573848  586177 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1017 19:57:12.574033  586177 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1017 19:57:15.921648  586177 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1017 19:57:15.922291  586177 profile.go:143] Saving config to /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/download-only-011118/config.json ...
	I1017 19:57:15.922375  586177 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/download-only-011118/config.json: {Name:mk51fbf19d80970da7bcb166eb6f283a9e7e8aab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1017 19:57:15.922671  586177 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1017 19:57:15.922992  586177 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21664-584308/.minikube/cache/bin/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-011118 host does not exist
	  To start a cluster, run: "minikube start -p download-only-011118"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.10s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-011118
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.92s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-506703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-506703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.922572649s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.92s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1017 19:57:23.109437  586172 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1017 19:57:23.109478  586172 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21664-584308/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-506703
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-506703: exit status 85 (391.86221ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-011118 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-011118 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ delete  │ -p download-only-011118                                                                                                                                                   │ download-only-011118 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │ 17 Oct 25 19:57 UTC │
	│ start   │ -o=json --download-only -p download-only-506703 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-506703 │ jenkins │ v1.37.0 │ 17 Oct 25 19:57 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/17 19:57:18
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1017 19:57:18.228594  586379 out.go:360] Setting OutFile to fd 1 ...
	I1017 19:57:18.228732  586379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:18.228738  586379 out.go:374] Setting ErrFile to fd 2...
	I1017 19:57:18.228742  586379 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 19:57:18.229133  586379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 19:57:18.230094  586379 out.go:368] Setting JSON to true
	I1017 19:57:18.230933  586379 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9584,"bootTime":1760721454,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 19:57:18.231003  586379 start.go:141] virtualization:  
	I1017 19:57:18.234238  586379 out.go:99] [download-only-506703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 19:57:18.234547  586379 notify.go:220] Checking for updates...
	I1017 19:57:18.238061  586379 out.go:171] MINIKUBE_LOCATION=21664
	I1017 19:57:18.241018  586379 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 19:57:18.243893  586379 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 19:57:18.247191  586379 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 19:57:18.250205  586379 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1017 19:57:18.256029  586379 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1017 19:57:18.256355  586379 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 19:57:18.285009  586379 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 19:57:18.285116  586379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:18.341118  586379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-17 19:57:18.331868894 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:18.341231  586379 docker.go:318] overlay module found
	I1017 19:57:18.344317  586379 out.go:99] Using the docker driver based on user configuration
	I1017 19:57:18.344357  586379 start.go:305] selected driver: docker
	I1017 19:57:18.344367  586379 start.go:925] validating driver "docker" against <nil>
	I1017 19:57:18.344474  586379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 19:57:18.397967  586379 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:51 SystemTime:2025-10-17 19:57:18.388433609 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 19:57:18.398126  586379 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1017 19:57:18.398422  586379 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1017 19:57:18.398598  586379 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1017 19:57:18.401643  586379 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-506703 host does not exist
	  To start a cluster, run: "minikube start -p download-only-506703"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.39s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.38s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-506703
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1017 19:57:24.971895  586172 binary.go:77] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-465085 --alsologtostderr --binary-mirror http://127.0.0.1:39883 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-465085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-465085
--- PASS: TestBinaryMirror (0.62s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-948763
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-948763: exit status 85 (60.43115ms)

                                                
                                                
-- stdout --
	* Profile "addons-948763" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-948763"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-948763
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-948763: exit status 85 (77.843068ms)

                                                
                                                
-- stdout --
	* Profile "addons-948763" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-948763"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (175.47s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-948763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-948763 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m55.466438275s)
--- PASS: TestAddons/Setup (175.47s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-948763 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-948763 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-948763 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-948763 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [56a82d5e-5c40-41d6-a49a-ad08eaba86bc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [56a82d5e-5c40-41d6-a49a-ad08eaba86bc] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004156031s
addons_test.go:694: (dbg) Run:  kubectl --context addons-948763 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-948763 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-948763 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-948763 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.77s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.45s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-948763
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-948763: (12.151584587s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-948763
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-948763
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-948763
--- PASS: TestAddons/StoppedEnableDisable (12.45s)

                                                
                                    
TestCertOptions (34.21s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-656847 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-656847 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (30.931750459s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-656847 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-656847 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-656847 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-656847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-656847
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-656847: (2.133749991s)
--- PASS: TestCertOptions (34.21s)

                                                
                                    
TestCertExpiration (237.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-026952 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-026952 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (38.723530531s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-026952 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-026952 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (16.237390608s)
helpers_test.go:175: Cleaning up "cert-expiration-026952" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-026952
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-026952: (2.663950689s)
--- PASS: TestCertExpiration (237.63s)

                                                
                                    
TestForceSystemdFlag (51.09s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-758295 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-758295 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (47.876876302s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-758295 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-758295" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-758295
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-758295: (2.822921481s)
--- PASS: TestForceSystemdFlag (51.09s)

                                                
                                    
TestForceSystemdEnv (40.12s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-762621 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-762621 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.602003362s)
helpers_test.go:175: Cleaning up "force-systemd-env-762621" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-762621
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-762621: (2.513720474s)
--- PASS: TestForceSystemdEnv (40.12s)

                                                
                                    
TestErrorSpam/setup (33.92s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-013200 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-013200 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-013200 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-013200 --driver=docker  --container-runtime=crio: (33.917292732s)
--- PASS: TestErrorSpam/setup (33.92s)

                                                
                                    
TestErrorSpam/start (0.83s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 start --dry-run
--- PASS: TestErrorSpam/start (0.83s)

                                                
                                    
TestErrorSpam/status (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (6.26s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause: exit status 80 (2.306717633s)

                                                
                                                
-- stdout --
	* Pausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause: exit status 80 (2.30561505s)

                                                
                                                
-- stdout --
	* Pausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:24Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause: exit status 80 (1.644662264s)

                                                
                                                
-- stdout --
	* Pausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.26s)

                                                
                                    
TestErrorSpam/unpause (6.03s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause: exit status 80 (1.909328789s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause: exit status 80 (1.859302871s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause: exit status 80 (2.262792571s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-013200 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-17T20:04:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_2.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.03s)
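All of the unpause attempts above fail the same way: minikube shells into the node and runs "sudo runc list -f json", which exits with status 1 because /run/runc does not exist on the crio node. A minimal way to replay that check by hand, assuming the nospam-013200 profile is still up (the ls probe is only an illustrative addition; both lines mirror the ssh invocation style used elsewhere in this report):

	out/minikube-linux-arm64 -p nospam-013200 ssh "sudo ls /run/runc"         # fails when the runc state directory is missing
	out/minikube-linux-arm64 -p nospam-013200 ssh "sudo runc list -f json"    # the command the unpause path reports as failing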

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 stop: (1.332903319s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-013200 --log_dir /tmp/nospam-013200 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21664-584308/.minikube/files/etc/test/nested/copy/586172/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (80.97s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1017 20:05:22.327266  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.333698  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.345087  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.366626  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.408083  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.489748  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.651307  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:22.973042  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:23.615211  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:24.896932  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:27.459741  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:32.581701  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:05:42.823973  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-787197 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.971582277s)
--- PASS: TestFunctional/serial/StartWithProxy (80.97s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (30.87s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1017 20:06:00.361456  586172 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --alsologtostderr -v=8
E1017 20:06:03.305367  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-787197 --alsologtostderr -v=8: (30.86323043s)
functional_test.go:678: soft start took 30.872365262s for "functional-787197" cluster.
I1017 20:06:31.225036  586172 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (30.87s)

                                                
                                    
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-787197 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:3.1: (1.172343756s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:3.3: (1.159996874s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 cache add registry.k8s.io/pause:latest: (1.083710523s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.42s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-787197 /tmp/TestFunctionalserialCacheCmdcacheadd_local780935802/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache add minikube-local-cache-test:functional-787197
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache delete minikube-local-cache-test:functional-787197
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-787197
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.322927ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
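The cache_reload sequence above amounts to: remove the cached image from the node, confirm crictl no longer finds it, run cache reload, and confirm the image is back. The same steps can be replayed by hand with the commands taken verbatim from the log (trailing comments are annotation, not part of the output):

	out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail: image was just removed
	out/minikube-linux-arm64 -p functional-787197 cache reload
	out/minikube-linux-arm64 -p functional-787197 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds once the cache is reloaded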

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 kubectl -- --context functional-787197 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-787197 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (37.7s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1017 20:06:44.266762  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-787197 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.702274162s)
functional_test.go:776: restart took 37.702371115s for "functional-787197" cluster.
I1017 20:07:16.227491  586172 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (37.70s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-787197 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.45s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 logs: (1.447304713s)
--- PASS: TestFunctional/serial/LogsCmd (1.45s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 logs --file /tmp/TestFunctionalserialLogsFileCmd2082164398/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 logs --file /tmp/TestFunctionalserialLogsFileCmd2082164398/001/logs.txt: (1.481206413s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
TestFunctional/serial/InvalidService (4.07s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-787197 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-787197
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-787197: exit status 115 (379.196365ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31636 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-787197 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.07s)
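InvalidService applies a Service manifest with no running backing pod and expects minikube service to bail out with SVC_UNREACHABLE (exit status 115), as shown above. A sketch of the same check, assuming it is run from the minikube test directory so the relative testdata path resolves:

	kubectl --context functional-787197 apply -f testdata/invalidsvc.yaml
	out/minikube-linux-arm64 service invalid-svc -p functional-787197      # expected: exit status 115, SVC_UNREACHABLE
	kubectl --context functional-787197 delete -f testdata/invalidsvc.yaml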

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 config get cpus: exit status 14 (85.155314ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 config get cpus: exit status 14 (76.928885ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787197 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-787197 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 612225: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.08s)

                                                
                                    
TestFunctional/parallel/DryRun (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787197 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.919718ms)

                                                
                                                
-- stdout --
	* [functional-787197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:17:51.302033  611936 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:17:51.302199  611936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:17:51.302210  611936 out.go:374] Setting ErrFile to fd 2...
	I1017 20:17:51.302216  611936 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:17:51.302573  611936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:17:51.303038  611936 out.go:368] Setting JSON to false
	I1017 20:17:51.303945  611936 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10817,"bootTime":1760721454,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:17:51.304032  611936 start.go:141] virtualization:  
	I1017 20:17:51.307246  611936 out.go:179] * [functional-787197] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:17:51.309303  611936 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:17:51.309447  611936 notify.go:220] Checking for updates...
	I1017 20:17:51.314863  611936 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:17:51.317733  611936 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:17:51.320615  611936 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:17:51.323357  611936 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:17:51.326156  611936 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:17:51.329330  611936 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:17:51.329897  611936 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:17:51.356748  611936 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:17:51.356875  611936 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:17:51.437836  611936 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:17:51.427918981 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:17:51.438224  611936 docker.go:318] overlay module found
	I1017 20:17:51.441433  611936 out.go:179] * Using the docker driver based on existing profile
	I1017 20:17:51.444177  611936 start.go:305] selected driver: docker
	I1017 20:17:51.444200  611936 start.go:925] validating driver "docker" against &{Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:17:51.444311  611936 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:17:51.447922  611936 out.go:203] 
	W1017 20:17:51.450721  611936 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1017 20:17:51.453503  611936 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.46s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-787197 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-787197 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (227.486301ms)

                                                
                                                
-- stdout --
	* [functional-787197] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:17:51.082894  611887 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:17:51.083217  611887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:17:51.083252  611887 out.go:374] Setting ErrFile to fd 2...
	I1017 20:17:51.083272  611887 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:17:51.084429  611887 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:17:51.084959  611887 out.go:368] Setting JSON to false
	I1017 20:17:51.085920  611887 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10817,"bootTime":1760721454,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:17:51.086031  611887 start.go:141] virtualization:  
	I1017 20:17:51.089952  611887 out.go:179] * [functional-787197] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1017 20:17:51.093990  611887 notify.go:220] Checking for updates...
	I1017 20:17:51.097731  611887 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:17:51.100812  611887 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:17:51.103771  611887 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:17:51.106744  611887 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:17:51.109641  611887 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:17:51.112601  611887 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:17:51.115963  611887 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:17:51.116571  611887 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:17:51.154252  611887 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:17:51.154442  611887 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:17:51.233337  611887 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:17:51.220675848 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:17:51.233444  611887 docker.go:318] overlay module found
	I1017 20:17:51.236428  611887 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1017 20:17:51.239443  611887 start.go:305] selected driver: docker
	I1017 20:17:51.239460  611887 start.go:925] validating driver "docker" against &{Name:functional-787197 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760609789-21757@sha256:9824b20f4774128fcb298ad0e6cac7649729886cfba9d444b2305c743a5044c6 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-787197 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1017 20:17:51.239564  611887 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:17:51.242794  611887 out.go:203] 
	W1017 20:17:51.245404  611887 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1017 20:17:51.247592  611887 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (26.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [7f4da848-4070-4e4c-ab32-a57a40c2a7be] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.066769604s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-787197 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-787197 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-787197 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-787197 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8993cd23-11c5-4428-a204-84099006cb92] Pending
helpers_test.go:352: "sp-pod" [8993cd23-11c5-4428-a204-84099006cb92] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8993cd23-11c5-4428-a204-84099006cb92] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00339054s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-787197 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-787197 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-787197 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [fd5681e4-9d94-4062-b564-5af55fb10271] Pending
helpers_test.go:352: "sp-pod" [fd5681e4-9d94-4062-b564-5af55fb10271] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [fd5681e4-9d94-4062-b564-5af55fb10271] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002899756s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-787197 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.95s)
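The PersistentVolumeClaim steps above boil down to a persistence check: create the PVC and a pod that mounts it, write a file into the volume, delete and recreate the pod, then confirm the file survived. A sketch using the same manifests and commands shown in the log, assuming the repo-relative testdata paths resolve:

	kubectl --context functional-787197 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-787197 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-787197 exec sp-pod -- touch /tmp/mount/foo   # wait for sp-pod to be Running first, as the test does
	kubectl --context functional-787197 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-787197 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-787197 exec sp-pod -- ls /tmp/mount          # foo should still be listed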

                                                
                                    
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh -n functional-787197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cp functional-787197:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4179430158/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh -n functional-787197 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh -n functional-787197 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/586172/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /etc/test/nested/copy/586172/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (2.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/586172.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /etc/ssl/certs/586172.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/586172.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /usr/share/ca-certificates/586172.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/5861722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /etc/ssl/certs/5861722.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/5861722.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /usr/share/ca-certificates/5861722.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-787197 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "sudo systemctl is-active docker": exit status 1 (286.96576ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "sudo systemctl is-active containerd": exit status 1 (280.788822ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.57s)
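The exit status 3 in the two failures above is expected: systemctl is-active returns a non-zero status for anything other than "active", so on a cri-o node the docker and containerd probes fail the SSH command while still printing "inactive". A small sketch of the same probe that keys off stdout instead of the exit code; the crio unit is added only for contrast and is not part of the check in the log.

// runtime_state.go - reports the systemd state of candidate runtimes inside the
// node. systemctl is-active exits non-zero for inactive units (3 here), so the
// text on stdout is the answer and the exit status is ignored when output exists.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func unitState(profile, unit string) string {
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo systemctl is-active "+unit).Output()
	state := strings.TrimSpace(string(out))
	if state == "" && err != nil {
		return "unknown (" + err.Error() + ")"
	}
	return state
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%-10s %s\n", unit, unitState("functional-787197", unit))
	}
}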

                                                
                                    
x
+
TestFunctional/parallel/License (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 608600: os: process already finished
helpers_test.go:519: unable to terminate pid 608388: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.68s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-787197 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [8224ffd7-2db3-4a1f-b72e-b83f6ae5dd28] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [8224ffd7-2db3-4a1f-b72e-b83f6ae5dd28] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003298979s
I1017 20:07:33.705073  586172 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.47s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-787197 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.58.157 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
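The tunnel steps above (run minikube tunnel, wait for nginx-svc to receive a LoadBalancer ingress IP, hit it over HTTP) can be scripted. A minimal sketch, assuming "minikube -p functional-787197 tunnel" is already running in another terminal and that the kubeconfig context carries the profile name.

// tunnel_probe.go - waits for the nginx-svc LoadBalancer to receive an ingress IP
// and then probes it over HTTP, mirroring the WaitService and AccessDirect steps.
package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

func ingressIP(ctx, svc string) string {
	out, _ := exec.Command("kubectl", "--context", ctx, "get", "svc", svc,
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	return strings.TrimSpace(string(out))
}

func main() {
	var ip string
	for i := 0; i < 60 && ip == ""; i++ { // the run above converged in about 8s
		ip = ingressIP("functional-787197", "nginx-svc")
		if ip == "" {
			time.Sleep(time.Second)
		}
	}
	if ip == "" {
		fmt.Println("no LoadBalancer ingress IP; is `minikube tunnel` running?")
		return
	}
	resp, err := http.Get("http://" + ip)
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("tunnel at http://"+ip, "answered with", resp.Status)
}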

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-787197 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "353.108501ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "64.433657ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "379.320169ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "55.612659ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdany-port565590225/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1760732258961455459" to /tmp/TestFunctionalparallelMountCmdany-port565590225/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1760732258961455459" to /tmp/TestFunctionalparallelMountCmdany-port565590225/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1760732258961455459" to /tmp/TestFunctionalparallelMountCmdany-port565590225/001/test-1760732258961455459
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (362.45914ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 20:17:39.325111  586172 retry.go:31] will retry after 654.303196ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 17 20:17 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 17 20:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 17 20:17 test-1760732258961455459
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh cat /mount-9p/test-1760732258961455459
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-787197 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [67aa21d3-05e2-42cd-94a6-b88db9ac82ff] Pending
helpers_test.go:352: "busybox-mount" [67aa21d3-05e2-42cd-94a6-b88db9ac82ff] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [67aa21d3-05e2-42cd-94a6-b88db9ac82ff] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [67aa21d3-05e2-42cd-94a6-b88db9ac82ff] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003811679s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-787197 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdany-port565590225/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.21s)
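The findmnt retry at the top of this block is the usual pattern: the 9p mount shows up shortly after minikube mount starts, so the first probe can legitimately fail. A sketch of the same wait-then-inspect flow, assuming the mount command from the log is already running.

// mount_probe.go - waits for the 9p mount to appear inside the node, then lists it.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	const profile = "functional-787197"
	var err error
	for attempt := 0; attempt < 5; attempt++ {
		// Same probe the log shows; findmnt exits non-zero until the mount exists.
		err = exec.Command("minikube", "-p", profile, "ssh",
			"findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			break
		}
		time.Sleep(700 * time.Millisecond) // the harness above retried after ~654ms
	}
	if err != nil {
		fmt.Println("mount never appeared:", err)
		return
	}
	out, _ := exec.Command("minikube", "-p", profile, "ssh", "--",
		"ls", "-la", "/mount-9p").Output()
	fmt.Print(string(out))
}

If mount helpers are left behind, "minikube mount -p functional-787197 --kill=true" (used by the VerifyCleanup block further down) tears them all down.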

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdspecific-port78549191/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.035676ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 20:17:46.519302  586172 retry.go:31] will retry after 277.211823ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdspecific-port78549191/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "sudo umount -f /mount-9p": exit status 1 (286.100843ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-787197 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdspecific-port78549191/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.67s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T" /mount1: exit status 1 (600.980555ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1017 20:17:48.444495  586172 retry.go:31] will retry after 554.462836ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-787197 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-787197 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3836257224/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 service list: (1.363007549s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 service list -o json: (1.444219898s)
functional_test.go:1504: Took "1.444307612s" to run "out/minikube-linux-arm64 -p functional-787197 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.44s)
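"service list -o json" is the machine-readable counterpart of the plain listing above. The payload itself is not shown in the log, so this sketch decodes it generically rather than assuming a schema.

// service_list.go - consumes "minikube service list -o json" without assuming a
// schema, since the payload is not shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-787197",
		"service", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("service list failed:", err)
		return
	}
	var services []map[string]any
	if err := json.Unmarshal(out, &services); err != nil {
		fmt.Println("unexpected output shape:", err)
		return
	}
	for _, svc := range services {
		fmt.Println(svc)
	}
}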

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 version -o=json --components: (1.28208635s)
--- PASS: TestFunctional/parallel/Version/components (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787197 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787197 image ls --format short --alsologtostderr:
I1017 20:18:08.264514  614649 out.go:360] Setting OutFile to fd 1 ...
I1017 20:18:08.264725  614649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.264752  614649 out.go:374] Setting ErrFile to fd 2...
I1017 20:18:08.264771  614649 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.265062  614649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
I1017 20:18:08.265792  614649 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.265976  614649 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.266536  614649 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
I1017 20:18:08.300295  614649 ssh_runner.go:195] Run: systemctl --version
I1017 20:18:08.300362  614649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
I1017 20:18:08.325162  614649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
I1017 20:18:08.429756  614649 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)
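The stderr above shows where the listing comes from: "sudo crictl images --output json" inside the node. A sketch that queries that directly; the field names follow the CRI image schema and should be read as an assumption, not a guarantee.

// crictl_images.go - lists images the way the stderr above hints at:
// "sudo crictl images --output json" inside the node. Field names follow the CRI
// image schema and are an assumption, not a guarantee.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type criImage struct {
	ID          string   `json:"id"`
	RepoTags    []string `json:"repoTags"`
	RepoDigests []string `json:"repoDigests"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-787197", "ssh",
		"sudo crictl images --output json").Output()
	if err != nil {
		fmt.Println("crictl listing failed:", err)
		return
	}
	var listing struct {
		Images []criImage `json:"images"`
	}
	if err := json.Unmarshal(out, &listing); err != nil {
		fmt.Println("could not decode crictl output:", err)
		return
	}
	for _, img := range listing.Images {
		for _, tag := range img.RepoTags {
			fmt.Printf("%-60s %s\n", tag, img.Size)
		}
	}
}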

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787197 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ docker.io/library/nginx                 │ latest             │ e35ad067421cc │ 184MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/library/nginx                 │ alpine             │ 9c92f55c0336c │ 54.7MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787197 image ls --format table --alsologtostderr:
I1017 20:18:08.588774  614725 out.go:360] Setting OutFile to fd 1 ...
I1017 20:18:08.588984  614725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.589010  614725 out.go:374] Setting ErrFile to fd 2...
I1017 20:18:08.589029  614725 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.590598  614725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
I1017 20:18:08.591722  614725 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.591910  614725 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.592421  614725 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
I1017 20:18:08.617729  614725 ssh_runner.go:195] Run: systemctl --version
I1017 20:18:08.617778  614725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
I1017 20:18:08.639363  614725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
I1017 20:18:08.746948  614725 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787197 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa","repoDigests":["docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0","docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54704654"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-
minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k
8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973
dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTa
gs":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"},{"id":"e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9","repoDigests":["docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6","docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a"],"repoTags":["docker.io/library/nginx:latest"],"size":"184136558"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e3993
10e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787197 image ls --format json --alsologtostderr:
I1017 20:18:08.563836  614720 out.go:360] Setting OutFile to fd 1 ...
I1017 20:18:08.564001  614720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.564008  614720 out.go:374] Setting ErrFile to fd 2...
I1017 20:18:08.564013  614720 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.564298  614720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
I1017 20:18:08.564934  614720 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.565057  614720 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.565495  614720 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
I1017 20:18:08.596354  614720 ssh_runner.go:195] Run: systemctl --version
I1017 20:18:08.596408  614720 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
I1017 20:18:08.615901  614720 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
I1017 20:18:08.721986  614720 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787197 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: e35ad067421ccda484ee30e4ccc8a38fa13f9a21dd8d356e495c2d3a1f0766e9
repoDigests:
- docker.io/library/nginx@sha256:3b7732505933ca591ce4a6d860cb713ad96a3176b82f7979a8dfa9973486a0d6
- docker.io/library/nginx@sha256:ac03974aaaeb5e3fbe2ab74d7f2badf1388596f6877cbacf78af3617addbba9a
repoTags:
- docker.io/library/nginx:latest
size: "184136558"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 9c92f55c0336c2597a5b458ba84a3fd242b209d8b5079443646a0d269df0d4aa
repoDigests:
- docker.io/library/nginx@sha256:5d9c9f5c85a351079cc9d2fae74be812ef134f21470926eb2afe8f33ff5859c0
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
repoTags:
- docker.io/library/nginx:alpine
size: "54704654"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787197 image ls --format yaml --alsologtostderr:
I1017 20:18:08.276896  614650 out.go:360] Setting OutFile to fd 1 ...
I1017 20:18:08.277256  614650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.277267  614650 out.go:374] Setting ErrFile to fd 2...
I1017 20:18:08.277272  614650 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:08.277574  614650 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
I1017 20:18:08.278265  614650 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.278397  614650 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:08.286996  614650 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
I1017 20:18:08.308698  614650 ssh_runner.go:195] Run: systemctl --version
I1017 20:18:08.308753  614650 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
I1017 20:18:08.341682  614650 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
I1017 20:18:08.453645  614650 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-787197 ssh pgrep buildkitd: exit status 1 (282.393471ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image build -t localhost/my-image:functional-787197 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-787197 image build -t localhost/my-image:functional-787197 testdata/build --alsologtostderr: (3.25053702s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-787197 image build -t localhost/my-image:functional-787197 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 731cc6544c1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-787197
--> 78e4486dce0
Successfully tagged localhost/my-image:functional-787197
78e4486dce02b4754ef2e1c8883a725dd2b7d09b1cfad468e1b428f6b26b1589
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-787197 image build -t localhost/my-image:functional-787197 testdata/build --alsologtostderr:
I1017 20:18:09.089878  614855 out.go:360] Setting OutFile to fd 1 ...
I1017 20:18:09.090975  614855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:09.091020  614855 out.go:374] Setting ErrFile to fd 2...
I1017 20:18:09.091040  614855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1017 20:18:09.091357  614855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
I1017 20:18:09.092063  614855 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:09.092748  614855 config.go:182] Loaded profile config "functional-787197": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1017 20:18:09.093291  614855 cli_runner.go:164] Run: docker container inspect functional-787197 --format={{.State.Status}}
I1017 20:18:09.110354  614855 ssh_runner.go:195] Run: systemctl --version
I1017 20:18:09.110417  614855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-787197
I1017 20:18:09.127761  614855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/functional-787197/id_rsa Username:docker}
I1017 20:18:09.229510  614855 build_images.go:161] Building image from path: /tmp/build.2512243807.tar
I1017 20:18:09.229580  614855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1017 20:18:09.237611  614855 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2512243807.tar
I1017 20:18:09.241488  614855 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2512243807.tar: stat -c "%s %y" /var/lib/minikube/build/build.2512243807.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2512243807.tar': No such file or directory
I1017 20:18:09.241517  614855 ssh_runner.go:362] scp /tmp/build.2512243807.tar --> /var/lib/minikube/build/build.2512243807.tar (3072 bytes)
I1017 20:18:09.260255  614855 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2512243807
I1017 20:18:09.267928  614855 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2512243807 -xf /var/lib/minikube/build/build.2512243807.tar
I1017 20:18:09.275645  614855 crio.go:315] Building image: /var/lib/minikube/build/build.2512243807
I1017 20:18:09.275719  614855 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-787197 /var/lib/minikube/build/build.2512243807 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1017 20:18:12.267465  614855 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-787197 /var/lib/minikube/build/build.2512243807 --cgroup-manager=cgroupfs: (2.991719214s)
I1017 20:18:12.267527  614855 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2512243807
I1017 20:18:12.275781  614855 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2512243807.tar
I1017 20:18:12.283685  614855 build_images.go:217] Built localhost/my-image:functional-787197 from /tmp/build.2512243807.tar
I1017 20:18:12.283714  614855 build_images.go:133] succeeded building to: functional-787197
I1017 20:18:12.283720  614855 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.77s)
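The printed build steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) are enough to reconstruct a similar context and rebuild it by hand; on a cri-o node, "minikube image build" delegates to podman, as the stderr above shows. A sketch with the Dockerfile inferred from those steps rather than copied from the real testdata/build directory.

// image_build.go - writes a build context inferred from the steps printed above
// (the real testdata/build directory is not shown in full) and builds it with
// "minikube image build".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "mk-build")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	_ = os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644)
	_ = os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644)

	cmd := exec.Command("minikube", "-p", "functional-787197", "image", "build",
		"-t", "localhost/my-image:functional-787197", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Println("build failed:", err)
	}
}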

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/10/17 20:18:02 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-787197
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image rm kicbase/echo-server:functional-787197 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-787197 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-787197
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-787197
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-787197
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (218.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1017 20:20:22.320632  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:21:45.392366  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m37.929724681s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (218.84s)
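The same HA (multi-control-plane) cluster can be brought up outside the harness with the flags shown above; this run took roughly 3m38s, so allow several minutes. A minimal wrapper:

// ha_start.go - brings up an HA cluster with the flags used above, then prints its
// status. Expect this to run for several minutes.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) error {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const profile = "ha-858120"
	if err := run("start", "-p", profile, "--ha", "--memory", "3072", "--wait", "true",
		"--driver=docker", "--container-runtime=crio"); err != nil {
		return
	}
	_ = run("status", "-p", profile, "--alsologtostderr", "-v", "5")
}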

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (7.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 kubectl -- rollout status deployment/busybox: (4.52134293s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8kb7f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8llg5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-jw7vx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8kb7f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8llg5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-jw7vx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8kb7f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8llg5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-jw7vx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.74s)
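
The DeployApp step boils down to: apply the busybox manifest, wait for the rollout, then exec `nslookup` in every pod against three names. A rough equivalent of that verification loop is sketched below, assuming `kubectl` is on PATH and pointed at the cluster; the pod names are placeholders, not the ones from this run:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Pods would normally be discovered with `kubectl get pods -o jsonpath=...`,
	// exactly as the test does above; these names are placeholders.
	pods := []string{"busybox-aaaaa", "busybox-bbbbb"}
	// The three lookups the test performs in each pod.
	targets := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

	for _, pod := range pods {
		for _, target := range targets {
			out, err := exec.Command("kubectl", "exec", pod, "--", "nslookup", target).CombinedOutput()
			if err != nil || !strings.Contains(string(out), "Name:") {
				log.Fatalf("DNS check failed in %s for %s: %v\n%s", pod, target, err, out)
			}
		}
	}
	log.Println("in-pod DNS checks passed")
}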

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.45s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8kb7f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8kb7f -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8llg5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-8llg5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-jw7vx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 kubectl -- exec busybox-7b57f96db7-jw7vx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.45s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (59.22s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node add --alsologtostderr -v 5
E1017 20:22:25.234227  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.240653  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.252013  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.273377  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.314751  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.396280  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.557656  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:25.879146  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:26.520432  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:27.802772  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:30.365558  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:35.487784  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:22:45.729682  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 node add --alsologtostderr -v 5: (58.175582311s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: (1.04121539s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.22s)
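
AddWorkerNode is a single `minikube node add` (no `--control-plane`, so the new node joins as a worker) followed by a `status` assertion; the AddSecondaryNode test later in this report runs the same command with `--control-plane`. A small sketch of the sequence, again assuming a `minikube` binary on PATH and a hypothetical profile:

package main

import (
	"log"
	"os"
	"os/exec"
)

func minikube(args ...string) *exec.Cmd {
	cmd := exec.Command("minikube", args...) // assumption: minikube on PATH
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd
}

func main() {
	profile := "ha-demo" // hypothetical profile
	// Add a worker node; append "--control-plane" for the secondary control-plane variant.
	if err := minikube("-p", profile, "node", "add").Run(); err != nil {
		log.Fatalf("node add failed: %v", err)
	}
	if err := minikube("-p", profile, "status").Run(); err != nil {
		log.Fatalf("status failed after node add: %v", err)
	}
}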

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-858120 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.08350878s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.08s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 status --output json --alsologtostderr -v 5: (1.040900683s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp testdata/cp-test.txt ha-858120:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120_ha-858120-m02.txt
E1017 20:23:06.213181  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test_ha-858120_ha-858120-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120_ha-858120-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test_ha-858120_ha-858120-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120_ha-858120-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test_ha-858120_ha-858120-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp testdata/cp-test.txt ha-858120-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m02:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m02_ha-858120.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test_ha-858120-m02_ha-858120.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m02:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120-m02_ha-858120-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test_ha-858120-m02_ha-858120-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m02:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120-m02_ha-858120-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test_ha-858120-m02_ha-858120-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp testdata/cp-test.txt ha-858120-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m03_ha-858120.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m03:/home/docker/cp-test.txt ha-858120-m04:/home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test_ha-858120-m03_ha-858120-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp testdata/cp-test.txt ha-858120-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1236976773/001/cp-test_ha-858120-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120:/home/docker/cp-test_ha-858120-m04_ha-858120.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120 "sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m02:/home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m02 "sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 cp ha-858120-m04:/home/docker/cp-test.txt ha-858120-m03:/home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 ssh -n ha-858120-m03 "sudo cat /home/docker/cp-test_ha-858120-m04_ha-858120-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.08s)
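
CopyFile exercises every copy direction (host to node, node to host, node to node), and each copy is verified by ssh-ing into the destination and cat-ing the file back. The sketch below shows one host-to-node copy plus its check; it assumes minikube on PATH, and the profile and node names are hypothetical:

package main

import (
	"bytes"
	"log"
	"os"
	"os/exec"
)

func main() {
	profile, node := "ha-demo", "ha-demo-m02" // hypothetical names
	src, dst := "testdata/cp-test.txt", "/home/docker/cp-test.txt"

	// Host -> node copy, same shape as the `minikube cp` invocations in the log above.
	if out, err := exec.Command("minikube", "-p", profile, "cp", src, node+":"+dst).CombinedOutput(); err != nil {
		log.Fatalf("cp failed: %v\n%s", err, out)
	}

	// Verify by reading the file back over ssh and comparing with the local source.
	want, err := os.ReadFile(src)
	if err != nil {
		log.Fatal(err)
	}
	got, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node, "sudo cat "+dst).Output()
	if err != nil {
		log.Fatalf("ssh cat failed: %v", err)
	}
	if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
		log.Fatal("copied file does not match the source")
	}
	log.Println("copy verified")
}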

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.9s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 node stop m02 --alsologtostderr -v 5: (12.098650096s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: exit status 7 (803.868181ms)

                                                
                                                
-- stdout --
	ha-858120
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858120-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858120-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-858120-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:23:35.996561  629603 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:23:35.996671  629603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:23:35.996681  629603 out.go:374] Setting ErrFile to fd 2...
	I1017 20:23:35.996687  629603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:23:35.996939  629603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:23:35.997118  629603 out.go:368] Setting JSON to false
	I1017 20:23:35.997152  629603 mustload.go:65] Loading cluster: ha-858120
	I1017 20:23:35.997246  629603 notify.go:220] Checking for updates...
	I1017 20:23:35.997547  629603 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:23:35.997564  629603 status.go:174] checking status of ha-858120 ...
	I1017 20:23:35.998100  629603 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:23:36.023507  629603 status.go:371] ha-858120 host status = "Running" (err=<nil>)
	I1017 20:23:36.023535  629603 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:23:36.023854  629603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120
	I1017 20:23:36.053337  629603 host.go:66] Checking if "ha-858120" exists ...
	I1017 20:23:36.053636  629603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:23:36.053689  629603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120
	I1017 20:23:36.072392  629603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120/id_rsa Username:docker}
	I1017 20:23:36.177055  629603 ssh_runner.go:195] Run: systemctl --version
	I1017 20:23:36.184224  629603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:23:36.198610  629603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:23:36.273052  629603 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-10-17 20:23:36.26361451 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:23:36.273807  629603 kubeconfig.go:125] found "ha-858120" server: "https://192.168.49.254:8443"
	I1017 20:23:36.273848  629603 api_server.go:166] Checking apiserver status ...
	I1017 20:23:36.273893  629603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:23:36.285585  629603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1245/cgroup
	I1017 20:23:36.295922  629603 api_server.go:182] apiserver freezer: "6:freezer:/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio/crio-aecf009a2f1b43a5a7cbd35cd90dbdeb14f358087002763b3f8cf337d16fa92d"
	I1017 20:23:36.296000  629603 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/0886947eb3347685154a1af99586a5ab07cfe59c9f68d1a1ae40d41e16f37196/crio/crio-aecf009a2f1b43a5a7cbd35cd90dbdeb14f358087002763b3f8cf337d16fa92d/freezer.state
	I1017 20:23:36.303584  629603 api_server.go:204] freezer state: "THAWED"
	I1017 20:23:36.303612  629603 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 20:23:36.311859  629603 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 20:23:36.311888  629603 status.go:463] ha-858120 apiserver status = Running (err=<nil>)
	I1017 20:23:36.311903  629603 status.go:176] ha-858120 status: &{Name:ha-858120 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:23:36.311927  629603 status.go:174] checking status of ha-858120-m02 ...
	I1017 20:23:36.312237  629603 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:23:36.329967  629603 status.go:371] ha-858120-m02 host status = "Stopped" (err=<nil>)
	I1017 20:23:36.329991  629603 status.go:384] host is not running, skipping remaining checks
	I1017 20:23:36.330002  629603 status.go:176] ha-858120-m02 status: &{Name:ha-858120-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:23:36.330022  629603 status.go:174] checking status of ha-858120-m03 ...
	I1017 20:23:36.330393  629603 cli_runner.go:164] Run: docker container inspect ha-858120-m03 --format={{.State.Status}}
	I1017 20:23:36.349085  629603 status.go:371] ha-858120-m03 host status = "Running" (err=<nil>)
	I1017 20:23:36.349110  629603 host.go:66] Checking if "ha-858120-m03" exists ...
	I1017 20:23:36.349421  629603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m03
	I1017 20:23:36.376433  629603 host.go:66] Checking if "ha-858120-m03" exists ...
	I1017 20:23:36.376749  629603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:23:36.376800  629603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m03
	I1017 20:23:36.396286  629603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m03/id_rsa Username:docker}
	I1017 20:23:36.500915  629603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:23:36.517801  629603 kubeconfig.go:125] found "ha-858120" server: "https://192.168.49.254:8443"
	I1017 20:23:36.517830  629603 api_server.go:166] Checking apiserver status ...
	I1017 20:23:36.517881  629603 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:23:36.530997  629603 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1197/cgroup
	I1017 20:23:36.544744  629603 api_server.go:182] apiserver freezer: "6:freezer:/docker/2d8a23efc0bea5995cdf3f0e31467c4c44ed535ec77d5a3ea3b3b76fd20d09d2/crio/crio-45cdb9aa3c9e4a9fca1a075e96812826b84b3bd2c1f22827b3d9c5d278cd8acd"
	I1017 20:23:36.544830  629603 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2d8a23efc0bea5995cdf3f0e31467c4c44ed535ec77d5a3ea3b3b76fd20d09d2/crio/crio-45cdb9aa3c9e4a9fca1a075e96812826b84b3bd2c1f22827b3d9c5d278cd8acd/freezer.state
	I1017 20:23:36.553460  629603 api_server.go:204] freezer state: "THAWED"
	I1017 20:23:36.553487  629603 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1017 20:23:36.561713  629603 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1017 20:23:36.561739  629603 status.go:463] ha-858120-m03 apiserver status = Running (err=<nil>)
	I1017 20:23:36.561749  629603 status.go:176] ha-858120-m03 status: &{Name:ha-858120-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:23:36.561778  629603 status.go:174] checking status of ha-858120-m04 ...
	I1017 20:23:36.562090  629603 cli_runner.go:164] Run: docker container inspect ha-858120-m04 --format={{.State.Status}}
	I1017 20:23:36.582305  629603 status.go:371] ha-858120-m04 host status = "Running" (err=<nil>)
	I1017 20:23:36.582330  629603 host.go:66] Checking if "ha-858120-m04" exists ...
	I1017 20:23:36.582748  629603 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-858120-m04
	I1017 20:23:36.601581  629603 host.go:66] Checking if "ha-858120-m04" exists ...
	I1017 20:23:36.601907  629603 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:23:36.601963  629603 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-858120-m04
	I1017 20:23:36.620504  629603 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/ha-858120-m04/id_rsa Username:docker}
	I1017 20:23:36.725449  629603 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:23:36.741335  629603 status.go:176] ha-858120-m04 status: &{Name:ha-858120-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.90s)
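
Note that `minikube status` deliberately exits non-zero (7 in the run above) once any node or component is stopped, so the harness runs it expecting a failure and then parses the per-node report from stdout. If you script around `status` yourself, the exit code has to be captured rather than treated as a hard error; a small sketch, assuming minikube on PATH and a hypothetical profile:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-demo", "status") // hypothetical profile
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("all nodes healthy")
	case errors.As(err, &exitErr):
		// A non-zero exit (7 above) just means some node or component is stopped;
		// the textual report on stdout is still valid and worth inspecting.
		fmt.Printf("status exit code %d, report:\n%s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run minikube:", err)
	}
}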

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.18s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node start m02 --alsologtostderr -v 5
E1017 20:23:47.175148  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 node start m02 --alsologtostderr -v 5: (30.729365494s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: (1.312968056s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.18s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.4s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.396622249s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.40s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (24.38s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 stop --alsologtostderr -v 5: (24.261689382s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: exit status 7 (116.99478ms)

                                                
                                                
-- stdout --
	ha-858120
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858120-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-858120-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:33:51.528224  641008 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:33:51.528343  641008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:33:51.528354  641008 out.go:374] Setting ErrFile to fd 2...
	I1017 20:33:51.528361  641008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:33:51.528612  641008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:33:51.528794  641008 out.go:368] Setting JSON to false
	I1017 20:33:51.528827  641008 mustload.go:65] Loading cluster: ha-858120
	I1017 20:33:51.528923  641008 notify.go:220] Checking for updates...
	I1017 20:33:51.529256  641008 config.go:182] Loaded profile config "ha-858120": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:33:51.529274  641008 status.go:174] checking status of ha-858120 ...
	I1017 20:33:51.529848  641008 cli_runner.go:164] Run: docker container inspect ha-858120 --format={{.State.Status}}
	I1017 20:33:51.548631  641008 status.go:371] ha-858120 host status = "Stopped" (err=<nil>)
	I1017 20:33:51.548657  641008 status.go:384] host is not running, skipping remaining checks
	I1017 20:33:51.548665  641008 status.go:176] ha-858120 status: &{Name:ha-858120 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:33:51.548696  641008 status.go:174] checking status of ha-858120-m02 ...
	I1017 20:33:51.548983  641008 cli_runner.go:164] Run: docker container inspect ha-858120-m02 --format={{.State.Status}}
	I1017 20:33:51.568796  641008 status.go:371] ha-858120-m02 host status = "Stopped" (err=<nil>)
	I1017 20:33:51.568817  641008 status.go:384] host is not running, skipping remaining checks
	I1017 20:33:51.568838  641008 status.go:176] ha-858120-m02 status: &{Name:ha-858120-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:33:51.568861  641008 status.go:174] checking status of ha-858120-m04 ...
	I1017 20:33:51.569162  641008 cli_runner.go:164] Run: docker container inspect ha-858120-m04 --format={{.State.Status}}
	I1017 20:33:51.596578  641008 status.go:371] ha-858120-m04 host status = "Stopped" (err=<nil>)
	I1017 20:33:51.596603  641008 status.go:384] host is not running, skipping remaining checks
	I1017 20:33:51.596610  641008 status.go:176] ha-858120-m04 status: &{Name:ha-858120-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (89.22s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.207308448s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (89.22s)
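
StopCluster and RestartCluster together form the power-cycle check: `minikube stop` for the profile, `minikube start --wait true` to bring all nodes back, then a readiness pass done with a kubectl go-template over the node Ready conditions (the template string in the log above). A hedged sketch of that final readiness check, assuming kubectl is on PATH and pointed at the restarted cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same idea as the go-template the test uses: print one Ready condition status per node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}} {{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatalf("kubectl get nodes failed: %v", err)
	}
	statuses := strings.Fields(string(out))
	for i, s := range statuses {
		if s != "True" {
			log.Fatalf("node %d is not Ready (condition status %q)", i, s)
		}
	}
	fmt.Printf("all %d nodes Ready after restart\n", len(statuses))
}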

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (84.56s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 node add --control-plane --alsologtostderr -v 5
E1017 20:35:22.321442  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 node add --control-plane --alsologtostderr -v 5: (1m23.348252753s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-858120 status --alsologtostderr -v 5: (1.213680271s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.56s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.068773034s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.07s)

                                                
                                    
TestJSONOutput/start/Command (79.23s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-734339 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1017 20:37:25.234788  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-734339 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.22705549s)
--- PASS: TestJSONOutput/start/Command (79.23s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.84s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-734339 --output=json --user=testUser
E1017 20:38:25.394466  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-734339 --output=json --user=testUser: (5.838205411s)
--- PASS: TestJSONOutput/stop/Command (5.84s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.26s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-830811 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-830811 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.85698ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"2fe65525-bfd5-448b-aea9-76a6a6faef0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-830811] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"917eb025-13e4-4a86-ba0d-01c4854df0be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"e7e78bf7-650f-4498-a059-b24d71b1634a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b589216a-deba-4e08-9033-531500394c71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig"}}
	{"specversion":"1.0","id":"1b78e1d4-8b82-4569-950e-b0ceb819e690","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube"}}
	{"specversion":"1.0","id":"9b332f4e-7150-42e0-8b07-d8585babe2f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"07f8e76f-396d-4962-886f-8ac6eeda357a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80906b70-e4c7-46a3-92ac-c7475eb76136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-830811" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-830811
--- PASS: TestErrorJSONOutput (0.26s)
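
With `--output=json`, minikube prints one CloudEvents-style JSON object per line, and error events (like the DRV_UNSUPPORTED_OS event above, exit code 56) carry the name/exitcode/message fields visible in the stdout block. Below is a sketch of decoding such a stream line by line; the struct models only the fields shown above, and the program reads the events from stdin (for example, piped from a `minikube start --output=json ...` run):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strings"
)

// event models just the fields visible in the JSON lines above.
type event struct {
	Type string `json:"type"`
	Data struct {
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		Name        string `json:"name"`
		Message     string `json:"message"`
		ExitCode    string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" {
			continue
		}
		var e event
		if err := json.Unmarshal([]byte(line), &e); err != nil {
			continue // skip non-JSON noise
		}
		switch {
		case strings.HasSuffix(e.Type, ".error"):
			fmt.Printf("error %s (exit %s): %s\n", e.Data.Name, e.Data.ExitCode, e.Data.Message)
		case strings.HasSuffix(e.Type, ".step"):
			fmt.Printf("step %s/%s: %s\n", e.Data.CurrentStep, e.Data.TotalSteps, e.Data.Message)
		}
	}
}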

                                                
                                    
TestKicCustomNetwork/create_custom_network (64.95s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-992815 --network=
E1017 20:38:48.301182  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-992815 --network=: (1m2.689271955s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-992815" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-992815
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-992815: (2.231654326s)
--- PASS: TestKicCustomNetwork/create_custom_network (64.95s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (38.08s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-010946 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-010946 --network=bridge: (35.979618013s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-010946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-010946
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-010946: (2.069204975s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (38.08s)

                                                
                                    
TestKicExistingNetwork (36.8s)
=== RUN   TestKicExistingNetwork
I1017 20:40:15.837696  586172 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1017 20:40:15.854632  586172 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1017 20:40:15.855809  586172 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1017 20:40:15.855850  586172 cli_runner.go:164] Run: docker network inspect existing-network
W1017 20:40:15.871878  586172 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1017 20:40:15.871915  586172 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1017 20:40:15.871933  586172 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1017 20:40:15.872039  586172 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1017 20:40:15.888470  586172 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a78c784685bd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:de:cd:04:2f:ed:35} reservation:<nil>}
I1017 20:40:15.893630  586172 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1017 20:40:15.894013  586172 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40026cf9f0}
I1017 20:40:15.894546  586172 network_create.go:124] attempt to create docker network existing-network 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
I1017 20:40:15.894633  586172 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1017 20:40:15.953167  586172 network_create.go:108] docker network existing-network 192.168.67.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-192473 --network=existing-network
E1017 20:40:22.325538  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-192473 --network=existing-network: (34.478007875s)
helpers_test.go:175: Cleaning up "existing-network-192473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-192473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-192473: (2.173570241s)
I1017 20:40:52.620638  586172 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (36.80s)
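
Condensed reproduction sketch of the steps captured above (not the exact test invocation): pre-create a bridge network, then point a profile at it with --network. The simplified docker network create flags, subnet, and profile name are taken from this run and are illustrative only.

    # Sketch, assuming the arm64 test binary at out/minikube-linux-arm64
    docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 existing-network
    out/minikube-linux-arm64 start -p existing-network-192473 --network=existing-network
    docker network ls --format '{{.Name}}'        # existing-network should still be listed
    out/minikube-linux-arm64 delete -p existing-network-192473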

                                                
                                    
TestKicCustomSubnet (40.09s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-732346 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-732346 --subnet=192.168.60.0/24: (37.770838861s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-732346 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-732346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-732346
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-732346: (2.288444348s)
--- PASS: TestKicCustomSubnet (40.09s)
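
Condensed from the run above: start a profile on an explicit subnet, then read the subnet back from the docker network. Profile name and subnet are the ones used in this run.

    out/minikube-linux-arm64 start -p custom-subnet-732346 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-732346 --format '{{(index .IPAM.Config 0).Subnet}}'   # expect 192.168.60.0/24
    out/minikube-linux-arm64 delete -p custom-subnet-732346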

                                                
                                    
TestKicStaticIP (36.67s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-413836 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-413836 --static-ip=192.168.200.200: (34.248004472s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-413836 ip
helpers_test.go:175: Cleaning up "static-ip-413836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-413836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-413836: (2.261696416s)
--- PASS: TestKicStaticIP (36.67s)
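
Condensed from the run above: start a profile with a fixed node IP and confirm it with `minikube ip`. Names and address are from this run.

    out/minikube-linux-arm64 start -p static-ip-413836 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-413836 ip     # expect 192.168.200.200
    out/minikube-linux-arm64 delete -p static-ip-413836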

                                                
                                    
TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (80.34s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-805384 --driver=docker  --container-runtime=crio
E1017 20:42:25.233778  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-805384 --driver=docker  --container-runtime=crio: (37.938565062s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-808059 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-808059 --driver=docker  --container-runtime=crio: (36.780252938s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-805384
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-808059
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-808059" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-808059
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-808059: (2.133792035s)
helpers_test.go:175: Cleaning up "first-805384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-805384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-805384: (2.04893257s)
--- PASS: TestMinikubeProfile (80.34s)
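
Condensed from the run above: create two profiles, switch between them, and list both. Profile names are from this run.

    out/minikube-linux-arm64 start -p first-805384 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 start -p second-808059 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 profile first-805384      # switch the active profile
    out/minikube-linux-arm64 profile list -ojson       # both profiles appear in the JSON listing
    out/minikube-linux-arm64 delete -p second-808059
    out/minikube-linux-arm64 delete -p first-805384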

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.56s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-673934 --memory=3072 --mount-string /tmp/TestMountStartserial1497659931/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-673934 --memory=3072 --mount-string /tmp/TestMountStartserial1497659931/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.555147656s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-673934 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)
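
Condensed from the two steps above: start a node with a host directory mounted, then verify the mount over ssh. The host path is the temp directory created for this run and is illustrative.

    out/minikube-linux-arm64 start -p mount-start-1-673934 --memory=3072 \
      --mount-string /tmp/TestMountStartserial1497659931/001:/minikube-host \
      --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p mount-start-1-673934 ssh -- ls /minikube-host   # host path visible inside the node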

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.83s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-676026 --memory=3072 --mount-string /tmp/TestMountStartserial1497659931/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-676026 --memory=3072 --mount-string /tmp/TestMountStartserial1497659931/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.829416299s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.83s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676026 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-673934 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-673934 --alsologtostderr -v=5: (1.706590308s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676026 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.3s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-676026
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-676026: (1.304633022s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.79s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-676026
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-676026: (7.790306881s)
--- PASS: TestMountStart/serial/RestartStopped (8.79s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-676026 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (137.8s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-653623 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1017 20:45:22.321380  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-653623 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m17.273484351s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (137.80s)
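
Condensed from the run above: bring up a two-node cluster and check that both nodes report Running.

    out/minikube-linux-arm64 start -p multinode-653623 --wait=true --memory=3072 --nodes=2 \
      -v=5 --alsologtostderr --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr   # control plane plus one worker, all Running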

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.28s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-653623 -- rollout status deployment/busybox: (3.327634104s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-cm9g9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-sgrxx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-cm9g9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-sgrxx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-cm9g9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-sgrxx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.28s)
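
Condensed from the run above. The busybox pod names (busybox-7b57f96db7-*) are run-specific, so <pod> below is a placeholder for one of the names returned by the get pods step.

    out/minikube-linux-arm64 kubectl -p multinode-653623 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p multinode-653623 -- rollout status deployment/busybox
    out/minikube-linux-arm64 kubectl -p multinode-653623 -- get pods -o jsonpath='{.items[*].metadata.name}'
    out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local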

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.9s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-cm9g9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-cm9g9 -- sh -c "ping -c 1 192.168.58.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-sgrxx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-653623 -- exec busybox-7b57f96db7-sgrxx -- sh -c "ping -c 1 192.168.58.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

                                                
                                    
TestMultiNode/serial/AddNode (57.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-653623 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-653623 -v=5 --alsologtostderr: (57.249511079s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-653623 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.4s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --output json --alsologtostderr
E1017 20:47:25.233682  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp testdata/cp-test.txt multinode-653623:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961880775/001/cp-test_multinode-653623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623:/home/docker/cp-test.txt multinode-653623-m02:/home/docker/cp-test_multinode-653623_multinode-653623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test_multinode-653623_multinode-653623-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623:/home/docker/cp-test.txt multinode-653623-m03:/home/docker/cp-test_multinode-653623_multinode-653623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test_multinode-653623_multinode-653623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp testdata/cp-test.txt multinode-653623-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961880775/001/cp-test_multinode-653623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m02:/home/docker/cp-test.txt multinode-653623:/home/docker/cp-test_multinode-653623-m02_multinode-653623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test_multinode-653623-m02_multinode-653623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m02:/home/docker/cp-test.txt multinode-653623-m03:/home/docker/cp-test_multinode-653623-m02_multinode-653623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test_multinode-653623-m02_multinode-653623-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp testdata/cp-test.txt multinode-653623-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2961880775/001/cp-test_multinode-653623-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m03:/home/docker/cp-test.txt multinode-653623:/home/docker/cp-test_multinode-653623-m03_multinode-653623.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test_multinode-653623-m03_multinode-653623.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623-m03:/home/docker/cp-test.txt multinode-653623-m02:/home/docker/cp-test_multinode-653623-m03_multinode-653623-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 "sudo cat /home/docker/cp-test_multinode-653623-m03_multinode-653623-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.40s)
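
Condensed from the run above: the same cp/ssh round trip, reduced to one host-to-node copy and one node-to-node copy. Paths and node names are from this run.

    out/minikube-linux-arm64 -p multinode-653623 cp testdata/cp-test.txt multinode-653623:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623 "sudo cat /home/docker/cp-test.txt"
    out/minikube-linux-arm64 -p multinode-653623 cp multinode-653623:/home/docker/cp-test.txt \
      multinode-653623-m02:/home/docker/cp-test_multinode-653623_multinode-653623-m02.txt
    out/minikube-linux-arm64 -p multinode-653623 ssh -n multinode-653623-m02 \
      "sudo cat /home/docker/cp-test_multinode-653623_multinode-653623-m02.txt"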

                                                
                                    
TestMultiNode/serial/StopNode (2.39s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-653623 node stop m03: (1.310301489s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-653623 status: exit status 7 (542.546104ms)

                                                
                                                
-- stdout --
	multinode-653623
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-653623-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-653623-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr: exit status 7 (539.060817ms)

                                                
                                                
-- stdout --
	multinode-653623
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-653623-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-653623-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:47:37.319269  691815 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:47:37.319479  691815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:47:37.319507  691815 out.go:374] Setting ErrFile to fd 2...
	I1017 20:47:37.319527  691815 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:47:37.319832  691815 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:47:37.320070  691815 out.go:368] Setting JSON to false
	I1017 20:47:37.320135  691815 mustload.go:65] Loading cluster: multinode-653623
	I1017 20:47:37.320169  691815 notify.go:220] Checking for updates...
	I1017 20:47:37.320609  691815 config.go:182] Loaded profile config "multinode-653623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:47:37.320645  691815 status.go:174] checking status of multinode-653623 ...
	I1017 20:47:37.321223  691815 cli_runner.go:164] Run: docker container inspect multinode-653623 --format={{.State.Status}}
	I1017 20:47:37.341730  691815 status.go:371] multinode-653623 host status = "Running" (err=<nil>)
	I1017 20:47:37.341752  691815 host.go:66] Checking if "multinode-653623" exists ...
	I1017 20:47:37.342187  691815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-653623
	I1017 20:47:37.368824  691815 host.go:66] Checking if "multinode-653623" exists ...
	I1017 20:47:37.369112  691815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:47:37.369159  691815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-653623
	I1017 20:47:37.388576  691815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33642 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/multinode-653623/id_rsa Username:docker}
	I1017 20:47:37.488838  691815 ssh_runner.go:195] Run: systemctl --version
	I1017 20:47:37.495485  691815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:47:37.508398  691815 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:47:37.572452  691815 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-10-17 20:47:37.562498474 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:47:37.572992  691815 kubeconfig.go:125] found "multinode-653623" server: "https://192.168.58.2:8443"
	I1017 20:47:37.573027  691815 api_server.go:166] Checking apiserver status ...
	I1017 20:47:37.573079  691815 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1017 20:47:37.584590  691815 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1222/cgroup
	I1017 20:47:37.593194  691815 api_server.go:182] apiserver freezer: "6:freezer:/docker/118e32d2b087f2c9209b6c08e9c6227aeb6271b944d94298113ca451277e0738/crio/crio-400a79c00425f8f50cc1cd75bea9cf6fb6527e61309ee5d3692e6bbe5eac448f"
	I1017 20:47:37.593264  691815 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/118e32d2b087f2c9209b6c08e9c6227aeb6271b944d94298113ca451277e0738/crio/crio-400a79c00425f8f50cc1cd75bea9cf6fb6527e61309ee5d3692e6bbe5eac448f/freezer.state
	I1017 20:47:37.601394  691815 api_server.go:204] freezer state: "THAWED"
	I1017 20:47:37.601435  691815 api_server.go:253] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I1017 20:47:37.609484  691815 api_server.go:279] https://192.168.58.2:8443/healthz returned 200:
	ok
	I1017 20:47:37.609514  691815 status.go:463] multinode-653623 apiserver status = Running (err=<nil>)
	I1017 20:47:37.609525  691815 status.go:176] multinode-653623 status: &{Name:multinode-653623 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:47:37.609577  691815 status.go:174] checking status of multinode-653623-m02 ...
	I1017 20:47:37.609909  691815 cli_runner.go:164] Run: docker container inspect multinode-653623-m02 --format={{.State.Status}}
	I1017 20:47:37.626819  691815 status.go:371] multinode-653623-m02 host status = "Running" (err=<nil>)
	I1017 20:47:37.626845  691815 host.go:66] Checking if "multinode-653623-m02" exists ...
	I1017 20:47:37.627234  691815 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-653623-m02
	I1017 20:47:37.644120  691815 host.go:66] Checking if "multinode-653623-m02" exists ...
	I1017 20:47:37.644435  691815 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1017 20:47:37.644486  691815 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-653623-m02
	I1017 20:47:37.661526  691815 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33647 SSHKeyPath:/home/jenkins/minikube-integration/21664-584308/.minikube/machines/multinode-653623-m02/id_rsa Username:docker}
	I1017 20:47:37.764132  691815 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1017 20:47:37.776785  691815 status.go:176] multinode-653623-m02 status: &{Name:multinode-653623-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:47:37.776819  691815 status.go:174] checking status of multinode-653623-m03 ...
	I1017 20:47:37.777128  691815 cli_runner.go:164] Run: docker container inspect multinode-653623-m03 --format={{.State.Status}}
	I1017 20:47:37.793983  691815 status.go:371] multinode-653623-m03 host status = "Stopped" (err=<nil>)
	I1017 20:47:37.794009  691815 status.go:384] host is not running, skipping remaining checks
	I1017 20:47:37.794017  691815 status.go:176] multinode-653623-m03 status: &{Name:multinode-653623-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.39s)
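
Condensed from this step and the StartAfterStop step that follows: stop one node, observe the non-zero status, and restart it.

    out/minikube-linux-arm64 -p multinode-653623 node stop m03
    out/minikube-linux-arm64 -p multinode-653623 status      # exit status 7 while m03 is stopped
    out/minikube-linux-arm64 -p multinode-653623 node start m03 -v=5 --alsologtostderr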

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.67s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-653623 node start m03 -v=5 --alsologtostderr: (7.868978426s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.67s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (79.34s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-653623
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-653623
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-653623: (25.011958297s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-653623 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-653623 --wait=true -v=5 --alsologtostderr: (54.211928416s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-653623
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.34s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.65s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-653623 node delete m03: (4.970041077s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.65s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.08s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-653623 stop: (23.868203157s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-653623 status: exit status 7 (96.901927ms)

                                                
                                                
-- stdout --
	multinode-653623
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-653623-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr: exit status 7 (111.539705ms)

                                                
                                                
-- stdout --
	multinode-653623
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-653623-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:49:35.472709  699601 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:49:35.472954  699601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:49:35.472981  699601 out.go:374] Setting ErrFile to fd 2...
	I1017 20:49:35.472989  699601 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:49:35.473418  699601 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:49:35.473719  699601 out.go:368] Setting JSON to false
	I1017 20:49:35.473771  699601 mustload.go:65] Loading cluster: multinode-653623
	I1017 20:49:35.474516  699601 config.go:182] Loaded profile config "multinode-653623": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:49:35.474540  699601 status.go:174] checking status of multinode-653623 ...
	I1017 20:49:35.474723  699601 notify.go:220] Checking for updates...
	I1017 20:49:35.475626  699601 cli_runner.go:164] Run: docker container inspect multinode-653623 --format={{.State.Status}}
	I1017 20:49:35.500867  699601 status.go:371] multinode-653623 host status = "Stopped" (err=<nil>)
	I1017 20:49:35.500888  699601 status.go:384] host is not running, skipping remaining checks
	I1017 20:49:35.500894  699601 status.go:176] multinode-653623 status: &{Name:multinode-653623 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1017 20:49:35.500929  699601 status.go:174] checking status of multinode-653623-m02 ...
	I1017 20:49:35.501227  699601 cli_runner.go:164] Run: docker container inspect multinode-653623-m02 --format={{.State.Status}}
	I1017 20:49:35.531584  699601 status.go:371] multinode-653623-m02 host status = "Stopped" (err=<nil>)
	I1017 20:49:35.531603  699601 status.go:384] host is not running, skipping remaining checks
	I1017 20:49:35.531667  699601 status.go:176] multinode-653623-m02 status: &{Name:multinode-653623-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.08s)
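
Condensed from this step and the RestartMultiNode step that follows: stop the whole cluster, confirm every host reports Stopped, then restart it.

    out/minikube-linux-arm64 -p multinode-653623 stop
    out/minikube-linux-arm64 -p multinode-653623 status      # exit status 7; every host reports Stopped
    out/minikube-linux-arm64 start -p multinode-653623 --wait=true -v=5 --alsologtostderr \
      --driver=docker --container-runtime=crio               # restores both nodes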

                                                
                                    
TestMultiNode/serial/RestartMultiNode (51.3s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-653623 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1017 20:50:22.320866  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-653623 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.613373052s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-653623 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.30s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-653623
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-653623-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-653623-m02 --driver=docker  --container-runtime=crio: exit status 14 (92.428605ms)

                                                
                                                
-- stdout --
	* [multinode-653623-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-653623-m02' is duplicated with machine name 'multinode-653623-m02' in profile 'multinode-653623'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-653623-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-653623-m03 --driver=docker  --container-runtime=crio: (34.689462108s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-653623
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-653623: exit status 80 (321.173872ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-653623 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-653623-m03 already exists in multinode-653623-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-653623-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-653623-m03: (2.116262271s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.27s)

                                                
                                    
TestPreload (129.96s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m2.968125541s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-776063 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-776063 image pull gcr.io/k8s-minikube/busybox: (2.290353372s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-776063
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-776063: (5.933876762s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1017 20:52:25.234210  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (56.056613183s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-776063 image list
helpers_test.go:175: Cleaning up "test-preload-776063" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-776063
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-776063: (2.469034685s)
--- PASS: TestPreload (129.96s)
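
Condensed from the run above: start without preloaded images, pull an extra image, stop, restart, and confirm the image is still present. Profile name and versions are from this run.

    out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=crio --kubernetes-version=v1.32.0
    out/minikube-linux-arm64 -p test-preload-776063 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop -p test-preload-776063
    out/minikube-linux-arm64 start -p test-preload-776063 --memory=3072 --wait=true \
      --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-776063 image list   # busybox should survive the stop/start cycle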

                                                
                                    
TestScheduledStopUnix (105.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-809521 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-809521 --memory=3072 --driver=docker  --container-runtime=crio: (28.800426245s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-809521 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-809521 -n scheduled-stop-809521
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-809521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1017 20:53:47.748642  586172 retry.go:31] will retry after 141.522µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.749747  586172 retry.go:31] will retry after 94.585µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.751005  586172 retry.go:31] will retry after 265.762µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.752095  586172 retry.go:31] will retry after 202.449µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.753178  586172 retry.go:31] will retry after 569.13µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.754275  586172 retry.go:31] will retry after 533.906µs: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.755376  586172 retry.go:31] will retry after 1.676022ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.757559  586172 retry.go:31] will retry after 2.036741ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.759693  586172 retry.go:31] will retry after 3.816079ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.763906  586172 retry.go:31] will retry after 2.157654ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.767181  586172 retry.go:31] will retry after 3.653107ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.771445  586172 retry.go:31] will retry after 8.499759ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.782886  586172 retry.go:31] will retry after 7.974593ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.791170  586172 retry.go:31] will retry after 15.593471ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.807207  586172 retry.go:31] will retry after 22.618421ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
I1017 20:53:47.830471  586172 retry.go:31] will retry after 50.941463ms: open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/scheduled-stop-809521/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-809521 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-809521 -n scheduled-stop-809521
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-809521
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-809521 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-809521
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-809521: exit status 7 (73.452963ms)

                                                
                                                
-- stdout --
	scheduled-stop-809521
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-809521 -n scheduled-stop-809521
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-809521 -n scheduled-stop-809521: exit status 7 (71.211013ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-809521" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-809521
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-809521: (4.560705038s)
--- PASS: TestScheduledStopUnix (105.05s)
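
Condensed from the run above: arm a scheduled stop, cancel it, re-arm it with a short delay, and check the resulting status. Profile name is from this run.

    out/minikube-linux-arm64 stop -p scheduled-stop-809521 --schedule 5m       # arm a delayed stop
    out/minikube-linux-arm64 stop -p scheduled-stop-809521 --cancel-scheduled  # disarm it
    out/minikube-linux-arm64 stop -p scheduled-stop-809521 --schedule 15s      # re-arm with a short delay
    out/minikube-linux-arm64 status -p scheduled-stop-809521                   # after it fires: exit status 7, host Stopped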

                                                
                                    
TestInsufficientStorage (13.97s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-079603 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1017 20:55:05.396393  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-079603 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.381044493s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a0cf8c9d-77fc-44b2-9f1f-d48b08794899","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-079603] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"83fd5607-3614-410c-b93e-93afa588e4e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21664"}}
	{"specversion":"1.0","id":"eaa80c84-c97d-4bab-97ff-e0697f8e8d47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fe9a7fe8-7c96-4fa1-8e53-fd53cfcf46dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig"}}
	{"specversion":"1.0","id":"1a5becce-65b5-4c3c-8326-02727c7ee81d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube"}}
	{"specversion":"1.0","id":"1bc31da8-80c4-4f69-a75d-3ee263ca42a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"eb2695b1-25f6-4be5-aeeb-57d3781e6a3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b2c78f00-39e6-4944-984d-c02aa5a2c8ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8895fe8b-cee1-4cb0-ba5c-f33bcf7bf9b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0c66305d-3f57-4c97-9a3c-d531287ada3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0ecc1de2-5dd8-4ad3-8d4a-609485b40b1e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"eab8d629-c8d3-493c-a08d-72adfa9cd3a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-079603\" primary control-plane node in \"insufficient-storage-079603\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"662d023d-361c-4a93-b82f-152f07cc2a2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760609789-21757 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d23e89c-c124-4696-8a16-2ab1b463bfb8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"b35934af-9b90-4f6c-a65d-e1f902980965","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-079603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-079603 --output=json --layout=cluster: exit status 7 (309.16485ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-079603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-079603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 20:55:15.124047  715729 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-079603" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-079603 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-079603 --output=json --layout=cluster: exit status 7 (302.571925ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-079603","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-079603","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1017 20:55:15.427876  715793 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-079603" does not appear in /home/jenkins/minikube-integration/21664-584308/kubeconfig
	E1017 20:55:15.437984  715793 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/insufficient-storage-079603/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-079603" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-079603
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-079603: (1.976321737s)
--- PASS: TestInsufficientStorage (13.97s)
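
Note: the assertion above keys off the --layout=cluster JSON shown in the stdout blocks. Below is a minimal Go sketch (illustrative only; the struct is hand-written from the keys visible in that output, not minikube's own types) that runs the same status command for this run's profile and checks for the 507/InsufficientStorage code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// clusterState mirrors the top-level keys visible in the status output above.
	type clusterState struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []struct {
			Name       string
			StatusCode int
			StatusName string
		}
	}

	func main() {
		// Exit status 7 is expected while the cluster is unhealthy, so the error
		// is ignored here; Output still returns whatever was written to stdout.
		out, _ := exec.Command("out/minikube-linux-arm64", "status",
			"-p", "insufficient-storage-079603",
			"--output=json", "--layout=cluster").Output()

		var st clusterState
		if err := json.Unmarshal(out, &st); err != nil {
			fmt.Println("decode:", err)
			return
		}
		// 507/InsufficientStorage is the condition status_test.go asserts on.
		fmt.Println(st.StatusName, st.StatusCode == 507)
	}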

                                                
                                    
TestRunningBinaryUpgrade (60.53s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.51091477 start -p running-upgrade-206464 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.51091477 start -p running-upgrade-206464 --memory=3072 --vm-driver=docker  --container-runtime=crio: (36.349427132s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-206464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-206464 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.094459054s)
helpers_test.go:175: Cleaning up "running-upgrade-206464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-206464
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-206464: (2.004501732s)
--- PASS: TestRunningBinaryUpgrade (60.53s)

                                                
                                    
TestKubernetesUpgrade (352.47s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.785120216s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-202932
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-202932: (1.361595628s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-202932 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-202932 status --format={{.Host}}: exit status 7 (69.249137ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1017 21:00:22.321553  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m38.273650254s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-202932 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (126.272177ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-202932] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-202932
	    minikube start -p kubernetes-upgrade-202932 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2029322 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-202932 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-202932 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (35.359929739s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-202932" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-202932
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-202932: (2.36158052s)
--- PASS: TestKubernetesUpgrade (352.47s)
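
Note: the K8S_DOWNGRADE_UNSUPPORTED exit above is minikube comparing the requested version against the version the existing cluster already runs. A hedged illustration of that kind of check (not minikube's actual implementation), using golang.org/x/mod/semver:

	package main

	import (
		"fmt"

		"golang.org/x/mod/semver"
	)

	func main() {
		existing := "v1.34.1"  // version the cluster is running (from the log above)
		requested := "v1.28.0" // version passed on the later start

		// semver.Compare returns a negative value when requested < existing,
		// which is the downgrade case the test expects to be refused.
		if semver.Compare(requested, existing) < 0 {
			fmt.Printf("refusing to downgrade existing Kubernetes %s cluster to %s\n",
				existing, requested)
			return
		}
		fmt.Println("same version or upgrade: proceed")
	}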

                                                
                                    
TestMissingContainerUpgrade (112.01s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2170055425 start -p missing-upgrade-861491 --memory=3072 --driver=docker  --container-runtime=crio
E1017 21:02:25.233568  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2170055425 start -p missing-upgrade-861491 --memory=3072 --driver=docker  --container-runtime=crio: (57.618962359s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-861491
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-861491
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-861491 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-861491 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.54621754s)
helpers_test.go:175: Cleaning up "missing-upgrade-861491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-861491
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-861491: (2.077716947s)
--- PASS: TestMissingContainerUpgrade (112.01s)

                                                
                                    
TestPause/serial/Start (92.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017644 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-017644 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m32.934019402s)
--- PASS: TestPause/serial/Start (92.93s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (109.482665ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-647470] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
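
Note: the exit status 14 above is minikube rejecting mutually exclusive flags before doing any work. A generic sketch of that style of pre-flight validation using the standard flag package (not minikube's own flag handling; the flag names and exit code simply mirror the message above):

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Reject the combination up front, as the MK_USAGE error above does.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14)
		}
		fmt.Println("flags ok")
	}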

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (44.28s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-647470 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1017 20:55:22.320928  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 20:55:28.302875  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-647470 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.850577699s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-647470 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (44.28s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.73s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.41107245s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-647470 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-647470 status -o json: exit status 2 (303.898601ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-647470","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-647470
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-647470: (2.017631349s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.73s)

                                                
                                    
TestNoKubernetes/serial/Start (8.96s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-647470 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (8.955964071s)
--- PASS: TestNoKubernetes/serial/Start (8.96s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-647470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-647470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (282.698761ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
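
Note: the exit status 1 above is the expected result: systemctl is-active exits non-zero (systemd reports 3 for an inactive unit) when kubelet is not running, which is what this test verifies. A small Go sketch of the same probe, assuming the binary path and profile name from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the test runs over minikube ssh: a non-zero exit means the
		// kubelet unit is not active inside the NoKubernetes-647470 node.
		cmd := exec.Command("out/minikube-linux-arm64", "ssh", "-p", "NoKubernetes-647470",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not running (expected here):", err)
			return
		}
		fmt.Println("kubelet is running")
	}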

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.09s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.09s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.31s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-647470
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-647470: (1.310172124s)
--- PASS: TestNoKubernetes/serial/Stop (1.31s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.69s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-647470 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-647470 --driver=docker  --container-runtime=crio: (7.693135152s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.69s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-647470 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-647470 "sudo systemctl is-active --quiet service kubelet": exit status 1 (299.487019ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestNetworkPlugins/group/false (3.73s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-667721 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-667721 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (197.559739ms)

                                                
                                                
-- stdout --
	* [false-667721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21664
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1017 20:56:34.608350  725298 out.go:360] Setting OutFile to fd 1 ...
	I1017 20:56:34.608569  725298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:56:34.608602  725298 out.go:374] Setting ErrFile to fd 2...
	I1017 20:56:34.608622  725298 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1017 20:56:34.609000  725298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21664-584308/.minikube/bin
	I1017 20:56:34.609840  725298 out.go:368] Setting JSON to false
	I1017 20:56:34.610840  725298 start.go:131] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13140,"bootTime":1760721454,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1017 20:56:34.610936  725298 start.go:141] virtualization:  
	I1017 20:56:34.614569  725298 out.go:179] * [false-667721] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1017 20:56:34.617564  725298 out.go:179]   - MINIKUBE_LOCATION=21664
	I1017 20:56:34.617668  725298 notify.go:220] Checking for updates...
	I1017 20:56:34.623249  725298 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1017 20:56:34.626245  725298 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21664-584308/kubeconfig
	I1017 20:56:34.629248  725298 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21664-584308/.minikube
	I1017 20:56:34.632186  725298 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1017 20:56:34.635204  725298 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1017 20:56:34.638725  725298 config.go:182] Loaded profile config "pause-017644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1017 20:56:34.638865  725298 driver.go:421] Setting default libvirt URI to qemu:///system
	I1017 20:56:34.670893  725298 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I1017 20:56:34.671020  725298 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1017 20:56:34.739426  725298 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-10-17 20:56:34.729943 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1017 20:56:34.739533  725298 docker.go:318] overlay module found
	I1017 20:56:34.743070  725298 out.go:179] * Using the docker driver based on user configuration
	I1017 20:56:34.745933  725298 start.go:305] selected driver: docker
	I1017 20:56:34.745953  725298 start.go:925] validating driver "docker" against <nil>
	I1017 20:56:34.745968  725298 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1017 20:56:34.749494  725298 out.go:203] 
	W1017 20:56:34.752437  725298 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1017 20:56:34.755319  725298 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-667721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-017644
contexts:
- context:
    cluster: pause-017644
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-017644
  name: pause-017644
current-context: pause-017644
kind: Config
preferences: {}
users:
- name: pause-017644
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.crt
    client-key: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-667721

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-667721"

                                                
                                                
----------------------- debugLogs end: false-667721 [took: 3.366614818s] --------------------------------
helpers_test.go:175: Cleaning up "false-667721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-667721
--- PASS: TestNetworkPlugins/group/false (3.73s)
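
Note: the exit status 14 in this group is another pre-flight usage check: --cni=false is rejected because the crio runtime needs a CNI plugin for pod networking. A hedged sketch of that sort of validation (illustrative only, not minikube's code; it only encodes the rule stated in the error above):

	package main

	import (
		"fmt"
		"os"
	)

	// validateCNI encodes the rule from the MK_USAGE error above: crio cannot
	// run pods without some CNI, so an explicit cni=false is rejected.
	func validateCNI(runtime, cni string) error {
		if runtime == "crio" && cni == "false" {
			return fmt.Errorf("the %q container runtime requires CNI", runtime)
		}
		return nil
	}

	func main() {
		if err := validateCNI("crio", "false"); err != nil {
			fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
			os.Exit(14)
		}
		fmt.Println("configuration ok")
	}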

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (30.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-017644 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-017644 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.469073694s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.50s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.24s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.24s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.97s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.1119997292 start -p stopped-upgrade-213683 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.1119997292 start -p stopped-upgrade-213683 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.68646519s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.1119997292 -p stopped-upgrade-213683 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.1119997292 -p stopped-upgrade-213683 stop: (1.321969745s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-213683 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1017 21:05:22.320564  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-213683 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.959384153s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.97s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (85.07s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m25.068610887s)
--- PASS: TestNetworkPlugins/group/auto/Start (85.07s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-213683
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-213683: (1.209738102s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (82.04s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m22.043539512s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (82.04s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-667721 "pgrep -a kubelet"
I1017 21:06:05.792835  586172 config.go:182] Loaded profile config "auto-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r4kw6" [75713b79-6e20-4762-baca-69e22d46976e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r4kw6" [75713b79-6e20-4762-baca-69e22d46976e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.003240169s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
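For reference, the auto-group checks above can be replayed by hand against the same profile. This is a minimal sketch using only commands that appear in the log (profile auto-667721 and testdata/netcat-deployment.yaml come from this run; substitute your own profile and manifest):

    out/minikube-linux-arm64 start -p auto-667721 --memory=3072 --wait=true --driver=docker --container-runtime=crio
    kubectl --context auto-667721 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-667721 exec deployment/netcat -- nslookup kubernetes.default                      # DNS
    kubectl --context auto-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"      # Localhost
    kubectl --context auto-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"         # HairPin

The Localhost step probes the in-pod listener directly, while the HairPin step connects from the pod back to its own service name (netcat).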

                                                
                                    
TestNetworkPlugins/group/calico/Start (63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m2.996777389s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8jx2s" [546083f7-f00f-40af-b1e2-b8dcbfa39e5d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00463454s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
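The ControllerPod step waits for the CNI agent pod (label app=kindnet in kube-system, per the log) to be Running and healthy. The test polls with its own helper; a rough manual equivalent, assuming kubectl wait is acceptable in place of that polling, is:

    kubectl --context kindnet-667721 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m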

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-667721 "pgrep -a kubelet"
I1017 21:06:58.370496  586172 config.go:182] Loaded profile config "kindnet-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vcx44" [803b0736-1ad1-4c46-9e3a-621d11415392] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vcx44" [803b0736-1ad1-4c46-9e3a-621d11415392] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.003351287s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (73.68s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m13.684861837s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (73.68s)
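Unlike the other groups, custom-flannel passes a manifest path to --cni instead of a built-in plugin name, and minikube applies that manifest as the cluster's CNI. A sketch of the general form (the profile name and manifest path below are placeholders, not taken from this run):

    minikube start -p custom-cni-demo --memory=3072 --cni=./my-cni.yaml --driver=docker --container-runtime=crio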

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-k976s" [e53550c2-f50f-42bd-9665-c0810de50796] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003586185s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-667721 "pgrep -a kubelet"
I1017 21:07:48.844336  586172 config.go:182] Loaded profile config "calico-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mxft4" [d4ada8e9-c067-46ed-9ce1-f0f3da0cca98] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mxft4" [d4ada8e9-c067-46ed-9ce1-f0f3da0cca98] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.0208926s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.36s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (80.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m20.826590324s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (80.83s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-667721 "pgrep -a kubelet"
I1017 21:08:50.778973  586172 config.go:182] Loaded profile config "custom-flannel-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.38s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5cbr7" [316257f4-ac59-4602-909c-e12b4682c7c1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5cbr7" [316257f4-ac59-4602-909c-e12b4682c7c1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003776804s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.98s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m5.977929737s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.98s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-667721 "pgrep -a kubelet"
I1017 21:09:48.574914  586172 config.go:182] Loaded profile config "enable-default-cni-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8h5cb" [c9c36b07-db3a-4159-9c0d-1ebbe0b3a259] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8h5cb" [c9c36b07-db3a-4159-9c0d-1ebbe0b3a259] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004034413s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.57s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (76.84s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-667721 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m16.836733005s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.84s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-vf8pr" [c4b9d939-c8ae-44cf-844f-d2e22cd615a3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004008367s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-667721 "pgrep -a kubelet"
I1017 21:10:36.612670  586172 config.go:182] Loaded profile config "flannel-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bq6mt" [a6b3103f-a620-421b-abd8-498b388702f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bq6mt" [a6b3103f-a620-421b-abd8-498b388702f8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003925686s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-667721 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (70.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1017 21:11:16.382768  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:26.624171  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m10.933910594s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (70.94s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-667721 "pgrep -a kubelet"
I1017 21:11:39.892775  586172 config.go:182] Loaded profile config "bridge-667721": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (13.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-667721 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xpwgp" [a7e86aa4-f238-4c0e-bcd1-281dd104bfbc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1017 21:11:45.399017  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:47.105633  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xpwgp" [a7e86aa4-f238-4c0e-bcd1-281dd104bfbc] Running
E1017 21:11:52.020712  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.027211  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.038657  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.060199  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.101581  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.183668  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.345164  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:11:52.667258  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 13.004853568s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.41s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-667721 exec deployment/netcat -- nslookup kubernetes.default
E1017 21:11:53.308924  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-667721 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)
E1017 21:18:14.067094  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m10.230072887s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-521710 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [67434d41-b1c0-448a-865e-0a81da0dde6b] Pending
helpers_test.go:352: "busybox" [67434d41-b1c0-448a-865e-0a81da0dde6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1017 21:12:25.234023  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/functional-787197/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [67434d41-b1c0-448a-865e-0a81da0dde6b] Running
E1017 21:12:28.067689  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004198661s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-521710 exec busybox -- /bin/sh -c "ulimit -n"
E1017 21:12:32.998154  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (13.44s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-521710 --alsologtostderr -v=3
E1017 21:12:42.467034  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.473431  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.484836  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.506431  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.547866  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.629245  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:42.791009  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:43.112704  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:43.754840  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:45.041010  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:12:47.608328  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-521710 --alsologtostderr -v=3: (13.444069823s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.44s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710: exit status 7 (88.956286ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
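EnableAddonAfterStop checks that an addon can be enabled while the cluster is stopped: the status call exits non-zero (exit status 7 with "Stopped" on stdout in this run), which the test tolerates, and the addon is then enabled anyway. Reproduced with the same commands from the log:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710 || true    # prints Stopped; exit status 7 here
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-521710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4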

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (58.56s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1017 21:12:52.730409  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:02.972697  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:13.959922  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:23.455007  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-521710 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (58.20820762s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-521710 -n old-k8s-version-521710
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (58.56s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-820018 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0ef24a65-39ad-473e-95c1-3c893463f1c4] Pending
helpers_test.go:352: "busybox" [0ef24a65-39ad-473e-95c1-3c893463f1c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0ef24a65-39ad-473e-95c1-3c893463f1c4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003605904s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-820018 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-820018 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-820018 --alsologtostderr -v=3: (12.080534119s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.08s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-66tmt" [155b15b2-18b2-41fd-b2a9-3cff308a5a6d] Running
E1017 21:13:49.989722  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:51.057468  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:51.063847  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:51.075231  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:51.097489  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003178301s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
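UserAppExistsAfterStop confirms that the dashboard pod deployed before the stop comes back up after the restart. To see the same pod by hand, list it by the label and namespace used in the log:

    kubectl --context old-k8s-version-521710 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard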

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018
E1017 21:13:51.139792  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018: exit status 7 (85.562993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-820018 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1017 21:13:51.221312  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (51.63s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:13:51.383532  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:51.705218  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:52.347089  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:13:53.629079  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-820018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.148043031s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-820018 -n no-preload-820018
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.63s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-66tmt" [155b15b2-18b2-41fd-b2a9-3cff308a5a6d] Running
E1017 21:13:56.191270  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004453021s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-521710 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.4s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-521710 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.40s)
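VerifyKubernetesImages lists the images present on the node and flags any repository outside the stock minikube set (here the busybox and kindnetd images). The same listing can be taken directly; --format=json is what the test consumes, and the default output is a plain list that is easier to scan by eye:

    out/minikube-linux-arm64 -p old-k8s-version-521710 image list --format=json
    out/minikube-linux-arm64 -p old-k8s-version-521710 image list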

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (94.19s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:14:32.036049  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:35.881242  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m34.186343317s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (94.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zvlnk" [faf9c1d5-5c44-45c5-bc2f-b91224a64db1] Running
E1017 21:14:48.901303  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:48.907762  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:48.919482  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:48.940947  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003273437s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-zvlnk" [faf9c1d5-5c44-45c5-bc2f-b91224a64db1] Running
E1017 21:14:48.982854  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:49.064314  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:49.226280  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:49.547933  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:50.190154  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:14:51.472342  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002773696s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-820018 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
E1017 21:14:54.034263  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-820018 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:15:09.398153  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:12.999501  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:22.320884  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/addons-948763/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:26.338900  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:29.880068  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.206925  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.213371  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.224871  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.246347  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.288063  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.369524  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.531043  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:30.852965  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:31.494807  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:32.776467  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:35.338271  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:15:40.459931  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m22.267873974s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-629583 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f7049e1c-d0c1-4766-8d4b-56f73f9c82db] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f7049e1c-d0c1-4766-8d4b-56f73f9c82db] Running
E1017 21:15:50.701635  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00361668s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-629583 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-629583 --alsologtostderr -v=3
E1017 21:16:06.121034  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-629583 --alsologtostderr -v=3: (12.303250705s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.30s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583
E1017 21:16:10.841799  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583: exit status 7 (71.738672ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-629583 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:16:11.183269  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-629583 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.129210722s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-629583 -n embed-certs-629583
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.75s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [cd3171c1-800a-494a-9758-08a92ae10d3c] Pending
helpers_test.go:352: "busybox" [cd3171c1-800a-494a-9758-08a92ae10d3c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [cd3171c1-800a-494a-9758-08a92ae10d3c] Running
E1017 21:16:33.831743  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/auto-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:34.921396  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/custom-flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.005486079s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.45s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-332023 --alsologtostderr -v=3
E1017 21:16:40.272439  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.278712  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.290070  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.311437  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.353325  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.434710  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.596181  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:40.918534  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:41.560311  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:42.842530  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:45.404425  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:50.525860  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-332023 --alsologtostderr -v=3: (12.342081656s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023: exit status 7 (68.963742ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-332023 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:16:52.019927  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/kindnet-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:16:52.145406  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/flannel-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:17:00.768440  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/bridge-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-332023 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.285440955s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-332023 -n default-k8s-diff-port-332023
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (57.72s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j59qn" [6f1d7780-2b4c-443d-929b-1de8ec40fe05] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003077939s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-j59qn" [6f1d7780-2b4c-443d-929b-1de8ec40fe05] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003858128s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-629583 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.14s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-629583 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:17:27.910775  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:17:32.763732  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/enable-default-cni-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:17:33.032478  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:17:42.466438  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1017 21:17:43.274650  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/old-k8s-version-521710/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (38.943607461s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.94s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vh6cd" [5a545536-453a-4470-8fae-376f46bef39c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.010824335s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vh6cd" [5a545536-453a-4470-8fae-376f46bef39c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003638195s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-332023 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-332023 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-229231 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-229231 --alsologtostderr -v=3: (1.531267677s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.53s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231: exit status 7 (119.884324ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-229231 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (15.3s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1017 21:18:10.180592  586172 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/calico-667721/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-229231 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (14.889019767s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-229231 -n newest-cni-229231
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-229231 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-785685 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-785685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-785685
--- SKIP: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-667721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-017644
contexts:
- context:
    cluster: pause-017644
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-017644
  name: pause-017644
current-context: pause-017644
kind: Config
preferences: {}
users:
- name: pause-017644
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.crt
    client-key: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-667721

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-667721"

                                                
                                                
----------------------- debugLogs end: kubenet-667721 [took: 3.485795105s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-667721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-667721
--- SKIP: TestNetworkPlugins/group/kubenet (3.64s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-667721 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-667721" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21664-584308/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-017644
contexts:
- context:
    cluster: pause-017644
    extensions:
    - extension:
        last-update: Fri, 17 Oct 2025 20:56:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-017644
  name: pause-017644
current-context: pause-017644
kind: Config
preferences: {}
users:
- name: pause-017644
  user:
    client-certificate: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.crt
    client-key: /home/jenkins/minikube-integration/21664-584308/.minikube/profiles/pause-017644/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-667721

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-667721" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-667721"

                                                
                                                
----------------------- debugLogs end: cilium-667721 [took: 4.024327975s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-667721" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-667721
--- SKIP: TestNetworkPlugins/group/cilium (4.18s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-028827" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-028827
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    